00:00:00.001 Started by upstream project "autotest-nightly" build number 4228 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3591 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.139 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.140 The recommended git tool is: git 00:00:00.140 using credential 00000000-0000-0000-0000-000000000002 00:00:00.143 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.158 Fetching changes from the remote Git repository 00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.191 Using shallow fetch with depth 1 00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.191 > git --version # timeout=10 00:00:00.224 > git --version # 'git version 2.39.2' 00:00:00.224 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.258 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.258 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.102 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.112 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.122 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.122 > git config core.sparsecheckout # timeout=10 00:00:06.132 > git read-tree -mu HEAD # timeout=10 00:00:06.146 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.165 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.165 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.272 [Pipeline] Start of Pipeline 00:00:06.287 [Pipeline] library 00:00:06.288 Loading library shm_lib@master 00:00:06.289 Library shm_lib@master is cached. Copying from home. 00:00:06.307 [Pipeline] node 00:00:06.321 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.323 [Pipeline] { 00:00:06.334 [Pipeline] catchError 00:00:06.336 [Pipeline] { 00:00:06.348 [Pipeline] wrap 00:00:06.355 [Pipeline] { 00:00:06.361 [Pipeline] stage 00:00:06.362 [Pipeline] { (Prologue) 00:00:06.377 [Pipeline] echo 00:00:06.378 Node: VM-host-SM38 00:00:06.384 [Pipeline] cleanWs 00:00:06.394 [WS-CLEANUP] Deleting project workspace... 00:00:06.394 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.401 [WS-CLEANUP] done 00:00:06.699 [Pipeline] setCustomBuildProperty 00:00:06.798 [Pipeline] httpRequest 00:00:07.163 [Pipeline] echo 00:00:07.164 Sorcerer 10.211.164.101 is alive 00:00:07.171 [Pipeline] retry 00:00:07.173 [Pipeline] { 00:00:07.184 [Pipeline] httpRequest 00:00:07.188 HttpMethod: GET 00:00:07.188 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.188 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.198 Response Code: HTTP/1.1 200 OK 00:00:07.199 Success: Status code 200 is in the accepted range: 200,404 00:00:07.199 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:08.838 [Pipeline] } 00:00:08.853 [Pipeline] // retry 00:00:08.860 [Pipeline] sh 00:00:09.144 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.162 [Pipeline] httpRequest 00:00:09.838 [Pipeline] echo 00:00:09.840 Sorcerer 10.211.164.101 is alive 00:00:09.851 [Pipeline] retry 00:00:09.854 [Pipeline] { 00:00:09.871 [Pipeline] httpRequest 00:00:09.877 HttpMethod: GET 00:00:09.877 URL: http://10.211.164.101/packages/spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60.tar.gz 00:00:09.878 Sending request to url: http://10.211.164.101/packages/spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60.tar.gz 00:00:09.897 Response Code: HTTP/1.1 200 OK 00:00:09.897 Success: Status code 200 is in the accepted range: 200,404 00:00:09.898 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60.tar.gz 00:00:39.373 [Pipeline] } 00:00:39.392 [Pipeline] // retry 00:00:39.399 [Pipeline] sh 00:00:39.686 + tar --no-same-owner -xf spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60.tar.gz 00:00:42.238 [Pipeline] sh 00:00:42.525 + git -C spdk log --oneline -n5 00:00:42.525 12fc2abf1 test: Remove autopackage.sh 00:00:42.525 83ba90867 fio/bdev: fix typo in README 00:00:42.525 45379ed84 module/compress: Cleanup vol data, when claim fails 00:00:42.525 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:00:42.525 1cbacb58f test/nvmf: Clarify comment about lack of support for iWARP in tests 00:00:42.545 [Pipeline] writeFile 00:00:42.560 [Pipeline] sh 00:00:42.847 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:42.860 [Pipeline] sh 00:00:43.159 + cat autorun-spdk.conf 00:00:43.159 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.159 SPDK_TEST_NVME=1 00:00:43.159 SPDK_TEST_FTL=1 00:00:43.159 SPDK_TEST_ISAL=1 00:00:43.159 SPDK_RUN_ASAN=1 00:00:43.159 SPDK_RUN_UBSAN=1 00:00:43.159 SPDK_TEST_XNVME=1 00:00:43.159 SPDK_TEST_NVME_FDP=1 00:00:43.159 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:43.168 RUN_NIGHTLY=1 00:00:43.169 [Pipeline] } 00:00:43.184 [Pipeline] // stage 00:00:43.199 [Pipeline] stage 00:00:43.201 [Pipeline] { (Run VM) 00:00:43.213 [Pipeline] sh 00:00:43.492 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:43.492 + echo 'Start stage prepare_nvme.sh' 00:00:43.492 Start stage prepare_nvme.sh 00:00:43.492 + [[ -n 9 ]] 00:00:43.492 + disk_prefix=ex9 00:00:43.492 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:43.492 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:43.492 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:43.492 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.492 ++ SPDK_TEST_NVME=1 00:00:43.492 ++ SPDK_TEST_FTL=1 00:00:43.492 ++ SPDK_TEST_ISAL=1 00:00:43.492 ++ SPDK_RUN_ASAN=1 00:00:43.492 ++ 
SPDK_RUN_UBSAN=1 00:00:43.492 ++ SPDK_TEST_XNVME=1 00:00:43.492 ++ SPDK_TEST_NVME_FDP=1 00:00:43.492 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:43.492 ++ RUN_NIGHTLY=1 00:00:43.492 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:43.492 + nvme_files=() 00:00:43.492 + declare -A nvme_files 00:00:43.492 + backend_dir=/var/lib/libvirt/images/backends 00:00:43.492 + nvme_files['nvme.img']=5G 00:00:43.492 + nvme_files['nvme-cmb.img']=5G 00:00:43.492 + nvme_files['nvme-multi0.img']=4G 00:00:43.492 + nvme_files['nvme-multi1.img']=4G 00:00:43.492 + nvme_files['nvme-multi2.img']=4G 00:00:43.492 + nvme_files['nvme-openstack.img']=8G 00:00:43.492 + nvme_files['nvme-zns.img']=5G 00:00:43.492 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:43.492 + (( SPDK_TEST_FTL == 1 )) 00:00:43.492 + nvme_files["nvme-ftl.img"]=6G 00:00:43.492 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:43.492 + nvme_files["nvme-fdp.img"]=1G 00:00:43.492 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:43.492 + for nvme in "${!nvme_files[@]}" 00:00:43.492 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:00:43.492 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:43.492 + for nvme in "${!nvme_files[@]}" 00:00:43.493 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G 00:00:43.751 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:43.751 + for nvme in "${!nvme_files[@]}" 00:00:43.751 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:00:43.751 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:43.751 + for nvme in "${!nvme_files[@]}" 00:00:43.751 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:00:44.009 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:44.009 + for nvme in "${!nvme_files[@]}" 00:00:44.009 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:00:44.267 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.267 + for nvme in "${!nvme_files[@]}" 00:00:44.267 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:00:44.267 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.267 + for nvme in "${!nvme_files[@]}" 00:00:44.267 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:00:44.267 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.267 + for nvme in "${!nvme_files[@]}" 00:00:44.267 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G 00:00:44.267 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:44.267 + for nvme in "${!nvme_files[@]}" 00:00:44.267 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:00:44.838 Formatting 
'/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.838 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:00:44.838 + echo 'End stage prepare_nvme.sh' 00:00:44.838 End stage prepare_nvme.sh 00:00:44.852 [Pipeline] sh 00:00:45.138 + DISTRO=fedora39 00:00:45.138 + CPUS=10 00:00:45.138 + RAM=12288 00:00:45.138 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:45.138 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:45.138 00:00:45.138 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:45.138 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:45.138 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:45.138 HELP=0 00:00:45.138 DRY_RUN=0 00:00:45.138 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img, 00:00:45.138 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:45.138 NVME_AUTO_CREATE=0 00:00:45.138 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,, 00:00:45.138 NVME_CMB=,,,, 00:00:45.138 NVME_PMR=,,,, 00:00:45.138 NVME_ZNS=,,,, 00:00:45.138 NVME_MS=true,,,, 00:00:45.138 NVME_FDP=,,,on, 00:00:45.138 SPDK_VAGRANT_DISTRO=fedora39 00:00:45.138 SPDK_VAGRANT_VMCPU=10 00:00:45.138 SPDK_VAGRANT_VMRAM=12288 00:00:45.138 SPDK_VAGRANT_PROVIDER=libvirt 00:00:45.138 SPDK_VAGRANT_HTTP_PROXY= 00:00:45.138 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:45.138 SPDK_OPENSTACK_NETWORK=0 00:00:45.138 VAGRANT_PACKAGE_BOX=0 00:00:45.138 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:45.138 FORCE_DISTRO=true 00:00:45.138 VAGRANT_BOX_VERSION= 00:00:45.138 EXTRA_VAGRANTFILES= 00:00:45.138 NIC_MODEL=e1000 00:00:45.138 00:00:45.138 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:45.138 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:47.688 Bringing machine 'default' up with 'libvirt' provider... 00:00:47.947 ==> default: Creating image (snapshot of base box volume). 00:00:47.947 ==> default: Creating domain with the following settings... 
00:00:47.947 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730307870_c90468ecab78764ed07a 00:00:47.947 ==> default: -- Domain type: kvm 00:00:47.947 ==> default: -- Cpus: 10 00:00:47.947 ==> default: -- Feature: acpi 00:00:47.947 ==> default: -- Feature: apic 00:00:47.947 ==> default: -- Feature: pae 00:00:47.947 ==> default: -- Memory: 12288M 00:00:47.947 ==> default: -- Memory Backing: hugepages: 00:00:47.947 ==> default: -- Management MAC: 00:00:47.947 ==> default: -- Loader: 00:00:47.947 ==> default: -- Nvram: 00:00:47.947 ==> default: -- Base box: spdk/fedora39 00:00:47.947 ==> default: -- Storage pool: default 00:00:47.947 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730307870_c90468ecab78764ed07a.img (20G) 00:00:47.947 ==> default: -- Volume Cache: default 00:00:47.947 ==> default: -- Kernel: 00:00:47.947 ==> default: -- Initrd: 00:00:47.947 ==> default: -- Graphics Type: vnc 00:00:47.947 ==> default: -- Graphics Port: -1 00:00:47.947 ==> default: -- Graphics IP: 127.0.0.1 00:00:47.947 ==> default: -- Graphics Password: Not defined 00:00:47.947 ==> default: -- Video Type: cirrus 00:00:47.947 ==> default: -- Video VRAM: 9216 00:00:47.947 ==> default: -- Sound Type: 00:00:47.947 ==> default: -- Keymap: en-us 00:00:47.947 ==> default: -- TPM Path: 00:00:47.947 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:47.947 ==> default: -- Command line args: 00:00:47.947 ==> default: -> value=-device, 00:00:47.947 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:47.947 ==> default: -> value=-drive, 00:00:47.947 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:47.947 ==> default: -> value=-device, 00:00:47.947 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:47.947 ==> default: -> value=-device, 00:00:47.947 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:47.947 ==> default: -> value=-drive, 00:00:47.947 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0, 00:00:47.947 ==> default: -> value=-device, 00:00:47.947 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:47.947 ==> default: -> value=-device, 00:00:47.947 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:47.947 ==> default: -> value=-drive, 00:00:47.947 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:47.947 ==> default: -> value=-device, 00:00:47.948 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:47.948 ==> default: -> value=-drive, 00:00:47.948 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:47.948 ==> default: -> value=-device, 00:00:47.948 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:47.948 ==> default: -> value=-drive, 00:00:47.948 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:47.948 ==> default: -> value=-device, 00:00:47.948 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:47.948 ==> default: -> value=-device, 00:00:47.948 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:47.948 ==> default: -> value=-device, 00:00:47.948 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:47.948 ==> default: -> value=-drive, 00:00:47.948 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:47.948 ==> default: -> value=-device, 00:00:47.948 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.209 ==> default: Creating shared folders metadata... 00:00:48.209 ==> default: Starting domain. 00:00:49.597 ==> default: Waiting for domain to get an IP address... 00:01:07.734 ==> default: Waiting for SSH to become available... 00:01:07.734 ==> default: Configuring and enabling network interfaces... 00:01:11.947 default: SSH address: 192.168.121.88:22 00:01:11.947 default: SSH username: vagrant 00:01:11.947 default: SSH auth method: private key 00:01:13.336 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:21.475 ==> default: Mounting SSHFS shared folder... 00:01:23.413 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:23.413 ==> default: Checking Mount.. 00:01:24.799 ==> default: Folder Successfully Mounted! 00:01:24.799 00:01:24.799 SUCCESS! 00:01:24.799 00:01:24.799 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:24.799 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:24.799 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:24.799 00:01:24.810 [Pipeline] } 00:01:24.825 [Pipeline] // stage 00:01:24.834 [Pipeline] dir 00:01:24.835 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:24.836 [Pipeline] { 00:01:24.849 [Pipeline] catchError 00:01:24.851 [Pipeline] { 00:01:24.864 [Pipeline] sh 00:01:25.149 + vagrant ssh-config --host vagrant 00:01:25.149 + sed -ne '/^Host/,$p' 00:01:25.149 + tee ssh_conf 00:01:28.450 Host vagrant 00:01:28.450 HostName 192.168.121.88 00:01:28.450 User vagrant 00:01:28.450 Port 22 00:01:28.450 UserKnownHostsFile /dev/null 00:01:28.450 StrictHostKeyChecking no 00:01:28.450 PasswordAuthentication no 00:01:28.450 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:28.450 IdentitiesOnly yes 00:01:28.450 LogLevel FATAL 00:01:28.450 ForwardAgent yes 00:01:28.450 ForwardX11 yes 00:01:28.450 00:01:28.465 [Pipeline] withEnv 00:01:28.467 [Pipeline] { 00:01:28.480 [Pipeline] sh 00:01:28.762 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:28.762 source /etc/os-release 00:01:28.762 [[ -e /image.version ]] && img=$(< /image.version) 00:01:28.762 # Minimal, systemd-like check. 
00:01:28.762 if [[ -e /.dockerenv ]]; then 00:01:28.762 # Clear garbage from the node'\''s name: 00:01:28.762 # agt-er_autotest_547-896 -> autotest_547-896 00:01:28.762 # $HOSTNAME is the actual container id 00:01:28.762 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:28.762 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:28.762 # We can assume this is a mount from a host where container is running, 00:01:28.762 # so fetch its hostname to easily identify the target swarm worker. 00:01:28.762 container="$(< /etc/hostname) ($agent)" 00:01:28.762 else 00:01:28.762 # Fallback 00:01:28.762 container=$agent 00:01:28.762 fi 00:01:28.762 fi 00:01:28.762 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:28.762 ' 00:01:29.035 [Pipeline] } 00:01:29.050 [Pipeline] // withEnv 00:01:29.057 [Pipeline] setCustomBuildProperty 00:01:29.069 [Pipeline] stage 00:01:29.071 [Pipeline] { (Tests) 00:01:29.086 [Pipeline] sh 00:01:29.370 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:29.384 [Pipeline] sh 00:01:29.667 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:29.944 [Pipeline] timeout 00:01:29.945 Timeout set to expire in 50 min 00:01:29.947 [Pipeline] { 00:01:29.962 [Pipeline] sh 00:01:30.251 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:30.560 HEAD is now at 12fc2abf1 test: Remove autopackage.sh 00:01:30.834 [Pipeline] sh 00:01:31.117 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:31.388 [Pipeline] sh 00:01:31.667 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:31.945 [Pipeline] sh 00:01:32.233 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:32.495 ++ readlink -f spdk_repo 00:01:32.495 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:32.495 + [[ -n /home/vagrant/spdk_repo ]] 00:01:32.495 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:32.495 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:32.495 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:32.495 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:32.495 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:32.495 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:32.495 + cd /home/vagrant/spdk_repo 00:01:32.495 + source /etc/os-release 00:01:32.495 ++ NAME='Fedora Linux' 00:01:32.495 ++ VERSION='39 (Cloud Edition)' 00:01:32.495 ++ ID=fedora 00:01:32.495 ++ VERSION_ID=39 00:01:32.495 ++ VERSION_CODENAME= 00:01:32.495 ++ PLATFORM_ID=platform:f39 00:01:32.495 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:32.495 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:32.495 ++ LOGO=fedora-logo-icon 00:01:32.495 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:32.495 ++ HOME_URL=https://fedoraproject.org/ 00:01:32.495 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:32.495 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:32.495 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:32.495 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:32.495 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:32.495 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:32.495 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:32.495 ++ SUPPORT_END=2024-11-12 00:01:32.495 ++ VARIANT='Cloud Edition' 00:01:32.495 ++ VARIANT_ID=cloud 00:01:32.495 + uname -a 00:01:32.495 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:32.495 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:32.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:33.016 Hugepages 00:01:33.016 node hugesize free / total 00:01:33.016 node0 1048576kB 0 / 0 00:01:33.016 node0 2048kB 0 / 0 00:01:33.016 00:01:33.016 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:33.016 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:33.016 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:33.016 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:33.016 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:01:33.016 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:33.277 + rm -f /tmp/spdk-ld-path 00:01:33.277 + source autorun-spdk.conf 00:01:33.277 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.277 ++ SPDK_TEST_NVME=1 00:01:33.277 ++ SPDK_TEST_FTL=1 00:01:33.277 ++ SPDK_TEST_ISAL=1 00:01:33.277 ++ SPDK_RUN_ASAN=1 00:01:33.277 ++ SPDK_RUN_UBSAN=1 00:01:33.277 ++ SPDK_TEST_XNVME=1 00:01:33.277 ++ SPDK_TEST_NVME_FDP=1 00:01:33.277 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.277 ++ RUN_NIGHTLY=1 00:01:33.277 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:33.277 + [[ -n '' ]] 00:01:33.277 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:33.277 + for M in /var/spdk/build-*-manifest.txt 00:01:33.277 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:33.277 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:33.277 + for M in /var/spdk/build-*-manifest.txt 00:01:33.277 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:33.277 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:33.277 + for M in /var/spdk/build-*-manifest.txt 00:01:33.277 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:33.277 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:33.277 ++ uname 00:01:33.277 + [[ Linux == \L\i\n\u\x ]] 00:01:33.277 + sudo dmesg -T 00:01:33.277 + sudo dmesg --clear 00:01:33.277 + dmesg_pid=5031 00:01:33.277 
+ [[ Fedora Linux == FreeBSD ]] 00:01:33.277 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.277 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.277 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:33.277 + [[ -x /usr/src/fio-static/fio ]] 00:01:33.277 + sudo dmesg -Tw 00:01:33.277 + export FIO_BIN=/usr/src/fio-static/fio 00:01:33.277 + FIO_BIN=/usr/src/fio-static/fio 00:01:33.277 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:33.277 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:33.277 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:33.277 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.277 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.277 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:33.277 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.277 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.277 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:33.277 17:05:16 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:33.277 17:05:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.277 17:05:16 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:01:33.277 17:05:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:33.277 17:05:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:33.592 17:05:16 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:33.592 17:05:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:33.592 17:05:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:33.592 17:05:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:33.592 17:05:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:33.592 17:05:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:33.592 17:05:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.592 17:05:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.592 17:05:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.592 17:05:16 -- paths/export.sh@5 -- $ export PATH 00:01:33.592 17:05:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:33.592 17:05:16 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:33.592 17:05:16 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:33.592 17:05:16 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730307916.XXXXXX 00:01:33.592 17:05:16 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730307916.4uaIOr 00:01:33.592 17:05:16 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:33.592 17:05:16 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:33.592 17:05:16 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:33.592 17:05:16 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:33.592 17:05:16 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:33.592 17:05:16 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:33.592 17:05:16 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:33.592 17:05:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.593 17:05:16 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:33.593 17:05:16 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:33.593 17:05:16 -- pm/common@17 -- $ local monitor 00:01:33.593 17:05:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.593 17:05:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:33.593 17:05:16 -- pm/common@25 -- $ sleep 1 00:01:33.593 17:05:16 -- pm/common@21 -- $ date +%s 00:01:33.593 17:05:16 -- pm/common@21 -- $ date +%s 00:01:33.593 17:05:16 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730307916 00:01:33.593 17:05:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730307916 00:01:33.593 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730307916_collect-cpu-load.pm.log 00:01:33.593 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730307916_collect-vmstat.pm.log 00:01:34.531 17:05:17 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:34.531 17:05:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:34.531 17:05:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:34.531 17:05:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:34.531 17:05:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:34.531 Wed Oct 30 05:05:17 PM UTC 2024 00:01:34.531 17:05:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:34.531 v25.01-pre-123-g12fc2abf1 00:01:34.531 17:05:17 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:34.531 17:05:17 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:34.531 17:05:17 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:34.531 17:05:17 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:34.531 17:05:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.531 ************************************ 00:01:34.531 START TEST asan 00:01:34.531 ************************************ 00:01:34.531 using asan 00:01:34.531 17:05:17 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:01:34.531 00:01:34.531 real 0m0.000s 00:01:34.531 user 0m0.000s 00:01:34.531 sys 0m0.000s 00:01:34.531 ************************************ 00:01:34.531 END TEST asan 00:01:34.531 ************************************ 00:01:34.531 17:05:17 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:34.531 17:05:17 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:34.531 17:05:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:34.531 17:05:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:34.531 17:05:17 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:34.531 17:05:17 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:34.531 17:05:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.531 ************************************ 00:01:34.531 START TEST ubsan 00:01:34.531 ************************************ 00:01:34.531 using ubsan 00:01:34.531 17:05:17 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:34.531 00:01:34.531 real 0m0.000s 00:01:34.531 user 0m0.000s 00:01:34.531 sys 0m0.000s 00:01:34.531 ************************************ 00:01:34.531 END TEST ubsan 00:01:34.531 ************************************ 00:01:34.531 17:05:17 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:34.531 17:05:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:34.531 17:05:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:34.531 17:05:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:34.531 17:05:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:34.531 17:05:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:34.531 17:05:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:34.531 17:05:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:34.531 17:05:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:34.531 17:05:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:34.531 17:05:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:34.792 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:34.792 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:35.096 Using 'verbs' RDMA provider 00:01:46.032 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:58.322 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:58.322 Creating mk/config.mk...done. 00:01:58.322 Creating mk/cc.flags.mk...done. 00:01:58.322 Type 'make' to build. 00:01:58.322 17:05:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:58.322 17:05:39 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:58.322 17:05:39 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:58.322 17:05:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.322 ************************************ 00:01:58.322 START TEST make 00:01:58.322 ************************************ 00:01:58.322 17:05:39 make -- common/autotest_common.sh@1127 -- $ make -j10 00:01:58.322 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:58.322 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:58.322 meson setup builddir \ 00:01:58.322 -Dwith-libaio=enabled \ 00:01:58.322 -Dwith-liburing=enabled \ 00:01:58.322 -Dwith-libvfn=disabled \ 00:01:58.322 -Dwith-spdk=disabled \ 00:01:58.322 -Dexamples=false \ 00:01:58.322 -Dtests=false \ 00:01:58.322 -Dtools=false && \ 00:01:58.322 meson compile -C builddir && \ 00:01:58.322 cd -) 00:01:58.322 make[1]: Nothing to be done for 'all'. 
00:01:59.708 The Meson build system 00:01:59.708 Version: 1.5.0 00:01:59.708 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:59.708 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:59.708 Build type: native build 00:01:59.708 Project name: xnvme 00:01:59.708 Project version: 0.7.5 00:01:59.708 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:59.708 C linker for the host machine: cc ld.bfd 2.40-14 00:01:59.708 Host machine cpu family: x86_64 00:01:59.708 Host machine cpu: x86_64 00:01:59.708 Message: host_machine.system: linux 00:01:59.708 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:59.708 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:59.708 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:59.708 Run-time dependency threads found: YES 00:01:59.708 Has header "setupapi.h" : NO 00:01:59.708 Has header "linux/blkzoned.h" : YES 00:01:59.708 Has header "linux/blkzoned.h" : YES (cached) 00:01:59.708 Has header "libaio.h" : YES 00:01:59.708 Library aio found: YES 00:01:59.708 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:59.708 Run-time dependency liburing found: YES 2.2 00:01:59.708 Dependency libvfn skipped: feature with-libvfn disabled 00:01:59.708 Found CMake: /usr/bin/cmake (3.27.7) 00:01:59.708 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:01:59.708 Subproject spdk : skipped: feature with-spdk disabled 00:01:59.708 Run-time dependency appleframeworks found: NO (tried framework) 00:01:59.708 Run-time dependency appleframeworks found: NO (tried framework) 00:01:59.708 Library rt found: YES 00:01:59.708 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:59.708 Configuring xnvme_config.h using configuration 00:01:59.708 Configuring xnvme.spec using configuration 00:01:59.708 Run-time dependency bash-completion found: YES 2.11 00:01:59.708 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:59.708 Program cp found: YES (/usr/bin/cp) 00:01:59.708 Build targets in project: 3 00:01:59.708 00:01:59.709 xnvme 0.7.5 00:01:59.709 00:01:59.709 Subprojects 00:01:59.709 spdk : NO Feature 'with-spdk' disabled 00:01:59.709 00:01:59.709 User defined options 00:01:59.709 examples : false 00:01:59.709 tests : false 00:01:59.709 tools : false 00:01:59.709 with-libaio : enabled 00:01:59.709 with-liburing: enabled 00:01:59.709 with-libvfn : disabled 00:01:59.709 with-spdk : disabled 00:01:59.709 00:01:59.709 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.969 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:59.969 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:01:59.969 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:01:59.969 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:01:59.969 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:01:59.969 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:01:59.969 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:01:59.969 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:00.235 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:00.235 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:00.235 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:00.235 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:00.235 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:00.235 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:00.235 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:00.235 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:00.235 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:00.235 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:00.235 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:00.235 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:00.235 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:00.235 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:00.235 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:00.235 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:00.235 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:00.235 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:00.235 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:00.235 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:00.235 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:00.235 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:00.235 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:00.235 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:00.235 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:00.235 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:00.235 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:00.236 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:00.498 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:00.498 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:00.498 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:00.498 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:00.498 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:00.498 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:00.498 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:00.498 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:00.498 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:00.498 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:00.498 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:00.498 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:00.498 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:00.498 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:00.498 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:00.498 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:00.498 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:00.498 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:00.498 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:00.498 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:00.498 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:00.498 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:00.498 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:00.498 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:00.498 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:00.498 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:00.498 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:00.498 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:00.498 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:00.498 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:00.757 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:00.757 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:00.757 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:00.757 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:00.757 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:00.757 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:00.757 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:00.757 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:01.016 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:01.016 [75/76] Linking static target lib/libxnvme.a 00:02:01.016 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:01.016 INFO: autodetecting backend as ninja 00:02:01.016 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:01.016 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:07.595 The Meson build system 00:02:07.595 Version: 1.5.0 00:02:07.595 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:07.595 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:07.595 Build type: native build 00:02:07.595 Program cat found: YES (/usr/bin/cat) 00:02:07.595 Project name: DPDK 00:02:07.595 Project version: 24.03.0 00:02:07.595 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:07.595 C linker for the host machine: cc ld.bfd 2.40-14 00:02:07.595 Host machine cpu family: x86_64 00:02:07.595 Host machine cpu: x86_64 00:02:07.595 Message: ## Building in Developer Mode ## 00:02:07.595 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.595 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:07.595 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.595 Program python3 found: YES (/usr/bin/python3) 00:02:07.595 Program cat found: YES (/usr/bin/cat) 00:02:07.595 Compiler for C supports arguments -march=native: YES 00:02:07.595 Checking for size of "void *" : 8 00:02:07.595 Checking for size of "void *" : 8 (cached) 00:02:07.595 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:07.595 Library m found: YES 00:02:07.595 Library numa found: YES 00:02:07.595 Has header "numaif.h" : YES 00:02:07.595 Library fdt found: NO 00:02:07.595 Library execinfo found: NO 00:02:07.595 Has header "execinfo.h" : YES 00:02:07.595 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:07.595 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.595 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.595 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.595 Run-time dependency openssl found: YES 3.1.1 00:02:07.595 Run-time dependency libpcap found: YES 1.10.4 00:02:07.595 Has header "pcap.h" with dependency libpcap: YES 00:02:07.595 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.595 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.595 Compiler for C supports arguments -Wformat: YES 00:02:07.595 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.595 Compiler for C supports arguments -Wformat-security: NO 00:02:07.595 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.595 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.595 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.595 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.595 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.595 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.595 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.595 Compiler for C supports arguments -Wundef: YES 00:02:07.595 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.595 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.595 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.595 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.595 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.595 Program objdump found: YES (/usr/bin/objdump) 00:02:07.595 Compiler for C supports arguments -mavx512f: YES 00:02:07.595 Checking if "AVX512 checking" compiles: YES 00:02:07.595 Fetching value of define "__SSE4_2__" : 1 00:02:07.595 Fetching value of define "__AES__" : 1 00:02:07.595 Fetching value of define "__AVX__" : 1 00:02:07.595 Fetching value of define "__AVX2__" : 1 00:02:07.595 Fetching value of define "__AVX512BW__" : 1 00:02:07.595 Fetching value of define "__AVX512CD__" : 1 00:02:07.595 Fetching value of define "__AVX512DQ__" : 1 00:02:07.595 Fetching value of define "__AVX512F__" : 1 00:02:07.595 Fetching value of define "__AVX512VL__" : 1 00:02:07.595 Fetching value of define "__PCLMUL__" : 1 00:02:07.595 Fetching value of define "__RDRND__" : 1 00:02:07.595 Fetching value of define "__RDSEED__" : 1 00:02:07.595 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:07.595 Fetching value of define "__znver1__" : (undefined) 00:02:07.595 Fetching value of define "__znver2__" : (undefined) 00:02:07.595 Fetching value of define "__znver3__" : (undefined) 00:02:07.595 Fetching value of define "__znver4__" : (undefined) 00:02:07.595 Library asan found: YES 00:02:07.595 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.595 Message: lib/log: Defining dependency "log" 00:02:07.595 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.595 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.595 Library rt found: YES 00:02:07.596 Checking for function "getentropy" : NO 00:02:07.596 Message: 
lib/eal: Defining dependency "eal" 00:02:07.596 Message: lib/ring: Defining dependency "ring" 00:02:07.596 Message: lib/rcu: Defining dependency "rcu" 00:02:07.596 Message: lib/mempool: Defining dependency "mempool" 00:02:07.596 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.596 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.596 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.596 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.596 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.596 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:07.596 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:07.596 Compiler for C supports arguments -mpclmul: YES 00:02:07.596 Compiler for C supports arguments -maes: YES 00:02:07.596 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.596 Compiler for C supports arguments -mavx512bw: YES 00:02:07.596 Compiler for C supports arguments -mavx512dq: YES 00:02:07.596 Compiler for C supports arguments -mavx512vl: YES 00:02:07.596 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.596 Compiler for C supports arguments -mavx2: YES 00:02:07.596 Compiler for C supports arguments -mavx: YES 00:02:07.596 Message: lib/net: Defining dependency "net" 00:02:07.596 Message: lib/meter: Defining dependency "meter" 00:02:07.596 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.596 Message: lib/pci: Defining dependency "pci" 00:02:07.596 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.596 Message: lib/hash: Defining dependency "hash" 00:02:07.596 Message: lib/timer: Defining dependency "timer" 00:02:07.596 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.596 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.596 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.596 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.596 Message: lib/power: Defining dependency "power" 00:02:07.596 Message: lib/reorder: Defining dependency "reorder" 00:02:07.596 Message: lib/security: Defining dependency "security" 00:02:07.596 Has header "linux/userfaultfd.h" : YES 00:02:07.596 Has header "linux/vduse.h" : YES 00:02:07.596 Message: lib/vhost: Defining dependency "vhost" 00:02:07.596 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.596 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.596 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.596 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.596 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:07.596 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:07.596 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:07.596 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:07.596 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:07.596 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:07.596 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:07.596 Configuring doxy-api-html.conf using configuration 00:02:07.596 Configuring doxy-api-man.conf using configuration 00:02:07.596 Program mandb found: YES (/usr/bin/mandb) 00:02:07.596 Program sphinx-build found: NO 00:02:07.596 Configuring rte_build_config.h using configuration 00:02:07.596 Message: 00:02:07.596 ================= 00:02:07.596 Applications Enabled 00:02:07.596 
================= 00:02:07.596 00:02:07.596 apps: 00:02:07.596 00:02:07.596 00:02:07.596 Message: 00:02:07.596 ================= 00:02:07.596 Libraries Enabled 00:02:07.596 ================= 00:02:07.596 00:02:07.596 libs: 00:02:07.596 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.596 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:07.596 cryptodev, dmadev, power, reorder, security, vhost, 00:02:07.596 00:02:07.596 Message: 00:02:07.596 =============== 00:02:07.596 Drivers Enabled 00:02:07.596 =============== 00:02:07.596 00:02:07.596 common: 00:02:07.596 00:02:07.596 bus: 00:02:07.596 pci, vdev, 00:02:07.596 mempool: 00:02:07.596 ring, 00:02:07.596 dma: 00:02:07.596 00:02:07.596 net: 00:02:07.596 00:02:07.596 crypto: 00:02:07.596 00:02:07.596 compress: 00:02:07.596 00:02:07.596 vdpa: 00:02:07.596 00:02:07.596 00:02:07.596 Message: 00:02:07.596 ================= 00:02:07.596 Content Skipped 00:02:07.596 ================= 00:02:07.596 00:02:07.596 apps: 00:02:07.596 dumpcap: explicitly disabled via build config 00:02:07.596 graph: explicitly disabled via build config 00:02:07.596 pdump: explicitly disabled via build config 00:02:07.596 proc-info: explicitly disabled via build config 00:02:07.596 test-acl: explicitly disabled via build config 00:02:07.596 test-bbdev: explicitly disabled via build config 00:02:07.596 test-cmdline: explicitly disabled via build config 00:02:07.596 test-compress-perf: explicitly disabled via build config 00:02:07.596 test-crypto-perf: explicitly disabled via build config 00:02:07.596 test-dma-perf: explicitly disabled via build config 00:02:07.596 test-eventdev: explicitly disabled via build config 00:02:07.596 test-fib: explicitly disabled via build config 00:02:07.596 test-flow-perf: explicitly disabled via build config 00:02:07.596 test-gpudev: explicitly disabled via build config 00:02:07.596 test-mldev: explicitly disabled via build config 00:02:07.596 test-pipeline: explicitly disabled via build config 00:02:07.596 test-pmd: explicitly disabled via build config 00:02:07.596 test-regex: explicitly disabled via build config 00:02:07.596 test-sad: explicitly disabled via build config 00:02:07.596 test-security-perf: explicitly disabled via build config 00:02:07.596 00:02:07.596 libs: 00:02:07.596 argparse: explicitly disabled via build config 00:02:07.596 metrics: explicitly disabled via build config 00:02:07.596 acl: explicitly disabled via build config 00:02:07.596 bbdev: explicitly disabled via build config 00:02:07.596 bitratestats: explicitly disabled via build config 00:02:07.596 bpf: explicitly disabled via build config 00:02:07.596 cfgfile: explicitly disabled via build config 00:02:07.596 distributor: explicitly disabled via build config 00:02:07.596 efd: explicitly disabled via build config 00:02:07.596 eventdev: explicitly disabled via build config 00:02:07.596 dispatcher: explicitly disabled via build config 00:02:07.596 gpudev: explicitly disabled via build config 00:02:07.596 gro: explicitly disabled via build config 00:02:07.596 gso: explicitly disabled via build config 00:02:07.596 ip_frag: explicitly disabled via build config 00:02:07.596 jobstats: explicitly disabled via build config 00:02:07.596 latencystats: explicitly disabled via build config 00:02:07.596 lpm: explicitly disabled via build config 00:02:07.596 member: explicitly disabled via build config 00:02:07.596 pcapng: explicitly disabled via build config 00:02:07.596 rawdev: explicitly disabled via build config 00:02:07.596 regexdev: explicitly 
disabled via build config 00:02:07.596 mldev: explicitly disabled via build config 00:02:07.596 rib: explicitly disabled via build config 00:02:07.596 sched: explicitly disabled via build config 00:02:07.596 stack: explicitly disabled via build config 00:02:07.596 ipsec: explicitly disabled via build config 00:02:07.596 pdcp: explicitly disabled via build config 00:02:07.596 fib: explicitly disabled via build config 00:02:07.596 port: explicitly disabled via build config 00:02:07.596 pdump: explicitly disabled via build config 00:02:07.596 table: explicitly disabled via build config 00:02:07.596 pipeline: explicitly disabled via build config 00:02:07.596 graph: explicitly disabled via build config 00:02:07.596 node: explicitly disabled via build config 00:02:07.596 00:02:07.596 drivers: 00:02:07.596 common/cpt: not in enabled drivers build config 00:02:07.596 common/dpaax: not in enabled drivers build config 00:02:07.596 common/iavf: not in enabled drivers build config 00:02:07.596 common/idpf: not in enabled drivers build config 00:02:07.596 common/ionic: not in enabled drivers build config 00:02:07.596 common/mvep: not in enabled drivers build config 00:02:07.596 common/octeontx: not in enabled drivers build config 00:02:07.596 bus/auxiliary: not in enabled drivers build config 00:02:07.596 bus/cdx: not in enabled drivers build config 00:02:07.596 bus/dpaa: not in enabled drivers build config 00:02:07.596 bus/fslmc: not in enabled drivers build config 00:02:07.596 bus/ifpga: not in enabled drivers build config 00:02:07.596 bus/platform: not in enabled drivers build config 00:02:07.596 bus/uacce: not in enabled drivers build config 00:02:07.596 bus/vmbus: not in enabled drivers build config 00:02:07.596 common/cnxk: not in enabled drivers build config 00:02:07.596 common/mlx5: not in enabled drivers build config 00:02:07.596 common/nfp: not in enabled drivers build config 00:02:07.596 common/nitrox: not in enabled drivers build config 00:02:07.596 common/qat: not in enabled drivers build config 00:02:07.596 common/sfc_efx: not in enabled drivers build config 00:02:07.596 mempool/bucket: not in enabled drivers build config 00:02:07.596 mempool/cnxk: not in enabled drivers build config 00:02:07.596 mempool/dpaa: not in enabled drivers build config 00:02:07.596 mempool/dpaa2: not in enabled drivers build config 00:02:07.596 mempool/octeontx: not in enabled drivers build config 00:02:07.596 mempool/stack: not in enabled drivers build config 00:02:07.596 dma/cnxk: not in enabled drivers build config 00:02:07.596 dma/dpaa: not in enabled drivers build config 00:02:07.596 dma/dpaa2: not in enabled drivers build config 00:02:07.596 dma/hisilicon: not in enabled drivers build config 00:02:07.596 dma/idxd: not in enabled drivers build config 00:02:07.596 dma/ioat: not in enabled drivers build config 00:02:07.596 dma/skeleton: not in enabled drivers build config 00:02:07.596 net/af_packet: not in enabled drivers build config 00:02:07.596 net/af_xdp: not in enabled drivers build config 00:02:07.596 net/ark: not in enabled drivers build config 00:02:07.596 net/atlantic: not in enabled drivers build config 00:02:07.596 net/avp: not in enabled drivers build config 00:02:07.596 net/axgbe: not in enabled drivers build config 00:02:07.596 net/bnx2x: not in enabled drivers build config 00:02:07.596 net/bnxt: not in enabled drivers build config 00:02:07.596 net/bonding: not in enabled drivers build config 00:02:07.596 net/cnxk: not in enabled drivers build config 00:02:07.596 net/cpfl: not in enabled drivers 
build config 00:02:07.596 net/cxgbe: not in enabled drivers build config 00:02:07.596 net/dpaa: not in enabled drivers build config 00:02:07.596 net/dpaa2: not in enabled drivers build config 00:02:07.596 net/e1000: not in enabled drivers build config 00:02:07.597 net/ena: not in enabled drivers build config 00:02:07.597 net/enetc: not in enabled drivers build config 00:02:07.597 net/enetfec: not in enabled drivers build config 00:02:07.597 net/enic: not in enabled drivers build config 00:02:07.597 net/failsafe: not in enabled drivers build config 00:02:07.597 net/fm10k: not in enabled drivers build config 00:02:07.597 net/gve: not in enabled drivers build config 00:02:07.597 net/hinic: not in enabled drivers build config 00:02:07.597 net/hns3: not in enabled drivers build config 00:02:07.597 net/i40e: not in enabled drivers build config 00:02:07.597 net/iavf: not in enabled drivers build config 00:02:07.597 net/ice: not in enabled drivers build config 00:02:07.597 net/idpf: not in enabled drivers build config 00:02:07.597 net/igc: not in enabled drivers build config 00:02:07.597 net/ionic: not in enabled drivers build config 00:02:07.597 net/ipn3ke: not in enabled drivers build config 00:02:07.597 net/ixgbe: not in enabled drivers build config 00:02:07.597 net/mana: not in enabled drivers build config 00:02:07.597 net/memif: not in enabled drivers build config 00:02:07.597 net/mlx4: not in enabled drivers build config 00:02:07.597 net/mlx5: not in enabled drivers build config 00:02:07.597 net/mvneta: not in enabled drivers build config 00:02:07.597 net/mvpp2: not in enabled drivers build config 00:02:07.597 net/netvsc: not in enabled drivers build config 00:02:07.597 net/nfb: not in enabled drivers build config 00:02:07.597 net/nfp: not in enabled drivers build config 00:02:07.597 net/ngbe: not in enabled drivers build config 00:02:07.597 net/null: not in enabled drivers build config 00:02:07.597 net/octeontx: not in enabled drivers build config 00:02:07.597 net/octeon_ep: not in enabled drivers build config 00:02:07.597 net/pcap: not in enabled drivers build config 00:02:07.597 net/pfe: not in enabled drivers build config 00:02:07.597 net/qede: not in enabled drivers build config 00:02:07.597 net/ring: not in enabled drivers build config 00:02:07.597 net/sfc: not in enabled drivers build config 00:02:07.597 net/softnic: not in enabled drivers build config 00:02:07.597 net/tap: not in enabled drivers build config 00:02:07.597 net/thunderx: not in enabled drivers build config 00:02:07.597 net/txgbe: not in enabled drivers build config 00:02:07.597 net/vdev_netvsc: not in enabled drivers build config 00:02:07.597 net/vhost: not in enabled drivers build config 00:02:07.597 net/virtio: not in enabled drivers build config 00:02:07.597 net/vmxnet3: not in enabled drivers build config 00:02:07.597 raw/*: missing internal dependency, "rawdev" 00:02:07.597 crypto/armv8: not in enabled drivers build config 00:02:07.597 crypto/bcmfs: not in enabled drivers build config 00:02:07.597 crypto/caam_jr: not in enabled drivers build config 00:02:07.597 crypto/ccp: not in enabled drivers build config 00:02:07.597 crypto/cnxk: not in enabled drivers build config 00:02:07.597 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.597 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.597 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.597 crypto/mlx5: not in enabled drivers build config 00:02:07.597 crypto/mvsam: not in enabled drivers build config 00:02:07.597 crypto/nitrox: 
not in enabled drivers build config 00:02:07.597 crypto/null: not in enabled drivers build config 00:02:07.597 crypto/octeontx: not in enabled drivers build config 00:02:07.597 crypto/openssl: not in enabled drivers build config 00:02:07.597 crypto/scheduler: not in enabled drivers build config 00:02:07.597 crypto/uadk: not in enabled drivers build config 00:02:07.597 crypto/virtio: not in enabled drivers build config 00:02:07.597 compress/isal: not in enabled drivers build config 00:02:07.597 compress/mlx5: not in enabled drivers build config 00:02:07.597 compress/nitrox: not in enabled drivers build config 00:02:07.597 compress/octeontx: not in enabled drivers build config 00:02:07.597 compress/zlib: not in enabled drivers build config 00:02:07.597 regex/*: missing internal dependency, "regexdev" 00:02:07.597 ml/*: missing internal dependency, "mldev" 00:02:07.597 vdpa/ifc: not in enabled drivers build config 00:02:07.597 vdpa/mlx5: not in enabled drivers build config 00:02:07.597 vdpa/nfp: not in enabled drivers build config 00:02:07.597 vdpa/sfc: not in enabled drivers build config 00:02:07.597 event/*: missing internal dependency, "eventdev" 00:02:07.597 baseband/*: missing internal dependency, "bbdev" 00:02:07.597 gpu/*: missing internal dependency, "gpudev" 00:02:07.597 00:02:07.597 00:02:07.597 Build targets in project: 84 00:02:07.597 00:02:07.597 DPDK 24.03.0 00:02:07.597 00:02:07.597 User defined options 00:02:07.597 buildtype : debug 00:02:07.597 default_library : shared 00:02:07.597 libdir : lib 00:02:07.597 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:07.597 b_sanitize : address 00:02:07.597 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:07.597 c_link_args : 00:02:07.597 cpu_instruction_set: native 00:02:07.597 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:07.597 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:07.597 enable_docs : false 00:02:07.597 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:07.597 enable_kmods : false 00:02:07.597 max_lcores : 128 00:02:07.597 tests : false 00:02:07.597 00:02:07.597 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.597 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:07.856 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.856 [2/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.856 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.856 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.856 [5/267] Linking static target lib/librte_log.a 00:02:07.856 [6/267] Linking static target lib/librte_kvargs.a 00:02:07.856 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.115 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.115 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.115 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.115 [11/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.115 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.115 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.115 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.115 [15/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.115 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.115 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.115 [18/267] Linking static target lib/librte_telemetry.a 00:02:08.373 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.373 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.373 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.373 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.373 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.373 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.632 [25/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.632 [26/267] Linking target lib/librte_log.so.24.1 00:02:08.632 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.632 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:08.632 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.632 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.632 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.890 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:08.890 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:08.890 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.890 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.890 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:08.890 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:08.890 [38/267] Linking target lib/librte_telemetry.so.24.1 00:02:08.890 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:08.890 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:08.890 [41/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.890 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:08.890 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.149 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.149 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.149 [46/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.149 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.149 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.407 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:02:09.407 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:09.407 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.407 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.407 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.407 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:09.407 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:09.407 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:09.407 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:09.666 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:09.666 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:09.666 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:09.666 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:09.666 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:09.666 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:09.666 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:09.666 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:09.925 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:09.925 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:09.925 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:09.925 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:09.925 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:09.925 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:10.183 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:10.183 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:10.183 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:10.183 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:10.183 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:10.183 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:10.183 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:10.441 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:10.441 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:10.441 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:10.441 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:10.441 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:10.441 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:10.441 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:10.441 [86/267] Linking static target lib/librte_ring.a 00:02:10.699 [87/267] Linking static target lib/librte_eal.a 00:02:10.699 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:10.699 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:10.699 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:10.699 [91/267] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:10.958 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:10.958 [93/267] Linking static target lib/librte_mempool.a 00:02:10.958 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:10.958 [95/267] Linking static target lib/librte_rcu.a 00:02:10.958 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:10.958 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.958 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:11.218 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:11.218 [100/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.218 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:11.218 [102/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:11.218 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:11.218 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:11.218 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:11.218 [106/267] Linking static target lib/librte_net.a 00:02:11.476 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:11.476 [108/267] Linking static target lib/librte_meter.a 00:02:11.476 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.476 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.476 [111/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:11.476 [112/267] Linking static target lib/librte_mbuf.a 00:02:11.476 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.735 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.735 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.735 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.735 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.994 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.994 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.994 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.253 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.253 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.253 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.513 [124/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.513 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.513 [126/267] Linking static target lib/librte_pci.a 00:02:12.513 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.513 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.513 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.513 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.513 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.513 [132/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.825 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:12.825 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.825 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.825 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.825 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.825 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.825 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.825 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.825 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.825 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:12.825 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.825 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:13.085 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.085 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.085 [147/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.085 [148/267] Linking static target lib/librte_cmdline.a 00:02:13.085 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.085 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.085 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.343 [152/267] Linking static target lib/librte_timer.a 00:02:13.343 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.343 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.343 [155/267] Linking static target lib/librte_ethdev.a 00:02:13.343 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.343 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:13.602 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.602 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:13.602 [160/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.602 [161/267] Linking static target lib/librte_compressdev.a 00:02:13.602 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.602 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.861 [164/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:13.861 [165/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.861 [166/267] Linking static target lib/librte_hash.a 00:02:13.861 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.861 [168/267] Linking static target lib/librte_dmadev.a 00:02:13.861 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.861 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.120 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:14.120 [172/267] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.120 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.120 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.120 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.379 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.379 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.379 [178/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.379 [179/267] Linking static target lib/librte_cryptodev.a 00:02:14.379 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.379 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.379 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.638 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.638 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:14.638 [185/267] Linking static target lib/librte_power.a 00:02:14.638 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.638 [187/267] Linking static target lib/librte_reorder.a 00:02:14.896 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.896 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.896 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.896 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.896 [192/267] Linking static target lib/librte_security.a 00:02:15.155 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.155 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.415 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.415 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.415 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.415 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.415 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.673 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.673 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:15.930 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.930 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.930 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.930 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.930 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.930 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.930 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.930 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.187 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.187 [211/267] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:02:16.187 [212/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:16.187 [213/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.187 [214/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.187 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.187 [216/267] Linking static target drivers/librte_bus_pci.a 00:02:16.187 [217/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.187 [218/267] Linking static target drivers/librte_bus_vdev.a 00:02:16.187 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:16.187 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.446 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.446 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.446 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.446 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.446 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:16.446 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.012 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.577 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.835 [229/267] Linking target lib/librte_eal.so.24.1 00:02:17.835 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.835 [231/267] Linking target lib/librte_ring.so.24.1 00:02:17.835 [232/267] Linking target lib/librte_pci.so.24.1 00:02:17.835 [233/267] Linking target lib/librte_meter.so.24.1 00:02:17.835 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:17.835 [235/267] Linking target lib/librte_timer.so.24.1 00:02:17.835 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:17.835 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.835 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:18.092 [239/267] Linking target lib/librte_rcu.so.24.1 00:02:18.092 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:18.092 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:18.092 [242/267] Linking target lib/librte_mempool.so.24.1 00:02:18.092 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:18.092 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:18.092 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:18.092 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.092 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.092 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:18.351 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.351 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:02:18.351 [251/267] Linking target 
lib/librte_compressdev.so.24.1 00:02:18.351 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:18.351 [253/267] Linking target lib/librte_net.so.24.1 00:02:18.351 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:18.351 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:18.351 [256/267] Linking target lib/librte_hash.so.24.1 00:02:18.351 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:18.351 [258/267] Linking target lib/librte_security.so.24.1 00:02:18.608 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.608 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.608 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:18.865 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.865 [263/267] Linking target lib/librte_power.so.24.1 00:02:19.431 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.691 [265/267] Linking static target lib/librte_vhost.a 00:02:20.625 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.626 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:20.626 INFO: autodetecting backend as ninja 00:02:20.626 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:35.491 CC lib/ut_mock/mock.o 00:02:35.491 CC lib/ut/ut.o 00:02:35.491 CC lib/log/log.o 00:02:35.491 CC lib/log/log_flags.o 00:02:35.491 CC lib/log/log_deprecated.o 00:02:35.491 LIB libspdk_ut.a 00:02:35.491 LIB libspdk_ut_mock.a 00:02:35.491 SO libspdk_ut.so.2.0 00:02:35.491 SO libspdk_ut_mock.so.6.0 00:02:35.491 LIB libspdk_log.a 00:02:35.491 SO libspdk_log.so.7.1 00:02:35.491 SYMLINK libspdk_ut.so 00:02:35.491 SYMLINK libspdk_ut_mock.so 00:02:35.491 SYMLINK libspdk_log.so 00:02:35.491 CC lib/ioat/ioat.o 00:02:35.491 CC lib/dma/dma.o 00:02:35.491 CXX lib/trace_parser/trace.o 00:02:35.491 CC lib/util/base64.o 00:02:35.491 CC lib/util/bit_array.o 00:02:35.491 CC lib/util/crc16.o 00:02:35.491 CC lib/util/crc32.o 00:02:35.491 CC lib/util/cpuset.o 00:02:35.491 CC lib/util/crc32c.o 00:02:35.491 CC lib/vfio_user/host/vfio_user_pci.o 00:02:35.491 CC lib/util/crc32_ieee.o 00:02:35.491 CC lib/vfio_user/host/vfio_user.o 00:02:35.491 CC lib/util/crc64.o 00:02:35.491 CC lib/util/dif.o 00:02:35.491 LIB libspdk_dma.a 00:02:35.491 CC lib/util/fd.o 00:02:35.491 SO libspdk_dma.so.5.0 00:02:35.491 CC lib/util/fd_group.o 00:02:35.491 CC lib/util/file.o 00:02:35.491 CC lib/util/hexlify.o 00:02:35.491 SYMLINK libspdk_dma.so 00:02:35.491 CC lib/util/iov.o 00:02:35.491 LIB libspdk_ioat.a 00:02:35.491 CC lib/util/math.o 00:02:35.491 SO libspdk_ioat.so.7.0 00:02:35.491 LIB libspdk_vfio_user.a 00:02:35.491 CC lib/util/net.o 00:02:35.491 SO libspdk_vfio_user.so.5.0 00:02:35.491 SYMLINK libspdk_ioat.so 00:02:35.491 CC lib/util/pipe.o 00:02:35.491 CC lib/util/strerror_tls.o 00:02:35.491 CC lib/util/string.o 00:02:35.491 SYMLINK libspdk_vfio_user.so 00:02:35.491 CC lib/util/uuid.o 00:02:35.491 CC lib/util/xor.o 00:02:35.491 CC lib/util/zipf.o 00:02:35.491 CC lib/util/md5.o 00:02:35.491 LIB libspdk_util.a 00:02:35.491 SO libspdk_util.so.10.0 00:02:35.491 LIB libspdk_trace_parser.a 00:02:35.491 SO libspdk_trace_parser.so.6.0 00:02:35.491 SYMLINK libspdk_util.so 00:02:35.491 SYMLINK libspdk_trace_parser.so 00:02:35.491 CC lib/idxd/idxd.o 
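The "User defined options" block printed by meson above can be read as the following configuration of the bundled DPDK. This is an illustrative reconstruction only, assuming a plain meson/ninja invocation; the real command is generated by SPDK's DPDK build wrapper, and the disable_apps/disable_libs lists are abbreviated here (the full lists appear in the summary above):

    # Reconstructed DPDK meson configuration (illustrative; not the exact CI command)
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        -Ddefault_library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Dmax_lcores=128 \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    # The [1/267] ... [267/267] progress lines above come from the backend build step:
    /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10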
00:02:35.491 CC lib/env_dpdk/env.o 00:02:35.491 CC lib/env_dpdk/memory.o 00:02:35.491 CC lib/env_dpdk/pci.o 00:02:35.491 CC lib/idxd/idxd_user.o 00:02:35.491 CC lib/conf/conf.o 00:02:35.491 CC lib/rdma_provider/common.o 00:02:35.491 CC lib/json/json_parse.o 00:02:35.491 CC lib/vmd/vmd.o 00:02:35.491 CC lib/rdma_utils/rdma_utils.o 00:02:35.491 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.491 LIB libspdk_conf.a 00:02:35.491 CC lib/idxd/idxd_kernel.o 00:02:35.492 CC lib/json/json_util.o 00:02:35.492 SO libspdk_conf.so.6.0 00:02:35.492 LIB libspdk_rdma_utils.a 00:02:35.492 SO libspdk_rdma_utils.so.1.0 00:02:35.492 SYMLINK libspdk_conf.so 00:02:35.492 CC lib/json/json_write.o 00:02:35.492 SYMLINK libspdk_rdma_utils.so 00:02:35.492 CC lib/vmd/led.o 00:02:35.492 LIB libspdk_rdma_provider.a 00:02:35.492 CC lib/env_dpdk/init.o 00:02:35.492 CC lib/env_dpdk/threads.o 00:02:35.492 SO libspdk_rdma_provider.so.6.0 00:02:35.492 SYMLINK libspdk_rdma_provider.so 00:02:35.492 CC lib/env_dpdk/pci_ioat.o 00:02:35.492 CC lib/env_dpdk/pci_virtio.o 00:02:35.492 CC lib/env_dpdk/pci_vmd.o 00:02:35.492 LIB libspdk_idxd.a 00:02:35.492 CC lib/env_dpdk/pci_idxd.o 00:02:35.492 SO libspdk_idxd.so.12.1 00:02:35.492 CC lib/env_dpdk/pci_event.o 00:02:35.492 SYMLINK libspdk_idxd.so 00:02:35.492 CC lib/env_dpdk/sigbus_handler.o 00:02:35.492 CC lib/env_dpdk/pci_dpdk.o 00:02:35.492 LIB libspdk_json.a 00:02:35.492 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.492 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.492 SO libspdk_json.so.6.0 00:02:35.492 SYMLINK libspdk_json.so 00:02:35.492 LIB libspdk_vmd.a 00:02:35.492 SO libspdk_vmd.so.6.0 00:02:35.492 SYMLINK libspdk_vmd.so 00:02:35.492 CC lib/jsonrpc/jsonrpc_server.o 00:02:35.492 CC lib/jsonrpc/jsonrpc_client.o 00:02:35.492 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:35.492 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:35.751 LIB libspdk_jsonrpc.a 00:02:35.751 SO libspdk_jsonrpc.so.6.0 00:02:35.751 SYMLINK libspdk_jsonrpc.so 00:02:36.009 CC lib/rpc/rpc.o 00:02:36.009 LIB libspdk_env_dpdk.a 00:02:36.009 SO libspdk_env_dpdk.so.15.1 00:02:36.267 LIB libspdk_rpc.a 00:02:36.267 SO libspdk_rpc.so.6.0 00:02:36.267 SYMLINK libspdk_env_dpdk.so 00:02:36.267 SYMLINK libspdk_rpc.so 00:02:36.526 CC lib/keyring/keyring.o 00:02:36.526 CC lib/keyring/keyring_rpc.o 00:02:36.526 CC lib/trace/trace.o 00:02:36.526 CC lib/trace/trace_flags.o 00:02:36.526 CC lib/trace/trace_rpc.o 00:02:36.526 CC lib/notify/notify.o 00:02:36.526 CC lib/notify/notify_rpc.o 00:02:36.526 LIB libspdk_notify.a 00:02:36.526 SO libspdk_notify.so.6.0 00:02:36.526 SYMLINK libspdk_notify.so 00:02:36.526 LIB libspdk_keyring.a 00:02:36.526 SO libspdk_keyring.so.2.0 00:02:36.526 LIB libspdk_trace.a 00:02:36.785 SO libspdk_trace.so.11.0 00:02:36.785 SYMLINK libspdk_keyring.so 00:02:36.785 SYMLINK libspdk_trace.so 00:02:36.785 CC lib/sock/sock.o 00:02:36.785 CC lib/thread/iobuf.o 00:02:36.785 CC lib/sock/sock_rpc.o 00:02:36.785 CC lib/thread/thread.o 00:02:37.351 LIB libspdk_sock.a 00:02:37.351 SO libspdk_sock.so.10.0 00:02:37.351 SYMLINK libspdk_sock.so 00:02:37.609 CC lib/nvme/nvme_ctrlr.o 00:02:37.609 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:37.609 CC lib/nvme/nvme_fabric.o 00:02:37.609 CC lib/nvme/nvme_ns_cmd.o 00:02:37.609 CC lib/nvme/nvme_pcie_common.o 00:02:37.609 CC lib/nvme/nvme_ns.o 00:02:37.609 CC lib/nvme/nvme_pcie.o 00:02:37.609 CC lib/nvme/nvme_qpair.o 00:02:37.609 CC lib/nvme/nvme.o 00:02:38.176 CC lib/nvme/nvme_quirks.o 00:02:38.176 CC lib/nvme/nvme_transport.o 00:02:38.176 CC lib/nvme/nvme_discovery.o 00:02:38.176 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.437 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.437 LIB libspdk_thread.a 00:02:38.437 CC lib/nvme/nvme_tcp.o 00:02:38.437 SO libspdk_thread.so.11.0 00:02:38.437 CC lib/nvme/nvme_opal.o 00:02:38.437 SYMLINK libspdk_thread.so 00:02:38.437 CC lib/nvme/nvme_io_msg.o 00:02:38.437 CC lib/nvme/nvme_poll_group.o 00:02:38.698 CC lib/nvme/nvme_zns.o 00:02:38.698 CC lib/nvme/nvme_stubs.o 00:02:38.698 CC lib/nvme/nvme_auth.o 00:02:38.698 CC lib/nvme/nvme_cuse.o 00:02:38.958 CC lib/nvme/nvme_rdma.o 00:02:38.958 CC lib/accel/accel.o 00:02:38.958 CC lib/blob/blobstore.o 00:02:39.219 CC lib/init/json_config.o 00:02:39.219 CC lib/virtio/virtio.o 00:02:39.480 CC lib/fsdev/fsdev.o 00:02:39.480 CC lib/init/subsystem.o 00:02:39.480 CC lib/init/subsystem_rpc.o 00:02:39.480 CC lib/virtio/virtio_vhost_user.o 00:02:39.480 CC lib/virtio/virtio_vfio_user.o 00:02:39.480 CC lib/init/rpc.o 00:02:39.741 CC lib/blob/request.o 00:02:39.741 LIB libspdk_init.a 00:02:39.741 CC lib/accel/accel_rpc.o 00:02:39.741 SO libspdk_init.so.6.0 00:02:39.741 CC lib/accel/accel_sw.o 00:02:39.741 SYMLINK libspdk_init.so 00:02:39.741 CC lib/virtio/virtio_pci.o 00:02:39.741 CC lib/fsdev/fsdev_io.o 00:02:39.741 CC lib/fsdev/fsdev_rpc.o 00:02:40.009 CC lib/blob/zeroes.o 00:02:40.009 CC lib/blob/blob_bs_dev.o 00:02:40.009 LIB libspdk_virtio.a 00:02:40.009 SO libspdk_virtio.so.7.0 00:02:40.009 LIB libspdk_accel.a 00:02:40.009 CC lib/event/reactor.o 00:02:40.009 CC lib/event/scheduler_static.o 00:02:40.009 CC lib/event/app_rpc.o 00:02:40.009 CC lib/event/app.o 00:02:40.009 CC lib/event/log_rpc.o 00:02:40.009 SO libspdk_accel.so.16.0 00:02:40.009 SYMLINK libspdk_virtio.so 00:02:40.288 SYMLINK libspdk_accel.so 00:02:40.288 LIB libspdk_fsdev.a 00:02:40.288 SO libspdk_fsdev.so.2.0 00:02:40.288 LIB libspdk_nvme.a 00:02:40.288 CC lib/bdev/bdev.o 00:02:40.288 CC lib/bdev/bdev_rpc.o 00:02:40.288 CC lib/bdev/scsi_nvme.o 00:02:40.288 SYMLINK libspdk_fsdev.so 00:02:40.288 CC lib/bdev/bdev_zone.o 00:02:40.288 CC lib/bdev/part.o 00:02:40.548 SO libspdk_nvme.so.14.1 00:02:40.548 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:40.548 LIB libspdk_event.a 00:02:40.548 SO libspdk_event.so.14.0 00:02:40.548 SYMLINK libspdk_event.so 00:02:40.548 SYMLINK libspdk_nvme.so 00:02:41.119 LIB libspdk_fuse_dispatcher.a 00:02:41.119 SO libspdk_fuse_dispatcher.so.1.0 00:02:41.119 SYMLINK libspdk_fuse_dispatcher.so 00:02:42.058 LIB libspdk_blob.a 00:02:42.058 SO libspdk_blob.so.11.0 00:02:42.058 SYMLINK libspdk_blob.so 00:02:42.315 CC lib/blobfs/blobfs.o 00:02:42.315 CC lib/blobfs/tree.o 00:02:42.315 CC lib/lvol/lvol.o 00:02:42.879 LIB libspdk_bdev.a 00:02:42.879 SO libspdk_bdev.so.17.0 00:02:43.136 SYMLINK libspdk_bdev.so 00:02:43.136 LIB libspdk_blobfs.a 00:02:43.136 SO libspdk_blobfs.so.10.0 00:02:43.136 SYMLINK libspdk_blobfs.so 00:02:43.136 LIB libspdk_lvol.a 00:02:43.136 CC lib/nvmf/ctrlr.o 00:02:43.136 CC lib/nvmf/ctrlr_discovery.o 00:02:43.136 CC lib/nvmf/ctrlr_bdev.o 00:02:43.136 CC lib/scsi/lun.o 00:02:43.136 CC lib/scsi/dev.o 00:02:43.136 CC lib/ublk/ublk.o 00:02:43.136 CC lib/nvmf/subsystem.o 00:02:43.136 CC lib/ftl/ftl_core.o 00:02:43.136 SO libspdk_lvol.so.10.0 00:02:43.137 CC lib/nbd/nbd.o 00:02:43.137 SYMLINK libspdk_lvol.so 00:02:43.394 CC lib/ftl/ftl_init.o 00:02:43.394 CC lib/nvmf/nvmf.o 00:02:43.394 CC lib/nvmf/nvmf_rpc.o 00:02:43.394 CC lib/scsi/port.o 00:02:43.651 CC lib/nbd/nbd_rpc.o 00:02:43.651 CC lib/scsi/scsi.o 00:02:43.651 CC lib/ftl/ftl_layout.o 00:02:43.651 CC lib/ftl/ftl_debug.o 00:02:43.651 CC 
lib/scsi/scsi_bdev.o 00:02:43.651 LIB libspdk_nbd.a 00:02:43.651 SO libspdk_nbd.so.7.0 00:02:43.908 SYMLINK libspdk_nbd.so 00:02:43.908 CC lib/ftl/ftl_io.o 00:02:43.908 CC lib/nvmf/transport.o 00:02:43.908 CC lib/ublk/ublk_rpc.o 00:02:43.908 CC lib/nvmf/tcp.o 00:02:43.908 CC lib/nvmf/stubs.o 00:02:43.908 LIB libspdk_ublk.a 00:02:43.908 CC lib/scsi/scsi_pr.o 00:02:44.164 CC lib/ftl/ftl_sb.o 00:02:44.164 SO libspdk_ublk.so.3.0 00:02:44.164 CC lib/ftl/ftl_l2p.o 00:02:44.164 CC lib/ftl/ftl_l2p_flat.o 00:02:44.164 SYMLINK libspdk_ublk.so 00:02:44.164 CC lib/ftl/ftl_nv_cache.o 00:02:44.164 CC lib/nvmf/mdns_server.o 00:02:44.164 CC lib/nvmf/rdma.o 00:02:44.164 CC lib/ftl/ftl_band.o 00:02:44.164 CC lib/ftl/ftl_band_ops.o 00:02:44.164 CC lib/scsi/scsi_rpc.o 00:02:44.164 CC lib/nvmf/auth.o 00:02:44.420 CC lib/scsi/task.o 00:02:44.420 CC lib/ftl/ftl_writer.o 00:02:44.420 CC lib/ftl/ftl_rq.o 00:02:44.420 CC lib/ftl/ftl_reloc.o 00:02:44.420 LIB libspdk_scsi.a 00:02:44.420 CC lib/ftl/ftl_l2p_cache.o 00:02:44.420 SO libspdk_scsi.so.9.0 00:02:44.677 SYMLINK libspdk_scsi.so 00:02:44.677 CC lib/ftl/ftl_p2l.o 00:02:44.677 CC lib/ftl/ftl_p2l_log.o 00:02:44.677 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.677 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.935 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:44.935 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:44.935 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:44.935 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:44.935 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.193 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.193 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.193 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.193 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.193 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:45.193 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.193 CC lib/iscsi/conn.o 00:02:45.193 CC lib/ftl/utils/ftl_conf.o 00:02:45.193 CC lib/ftl/utils/ftl_md.o 00:02:45.193 CC lib/iscsi/init_grp.o 00:02:45.193 CC lib/iscsi/iscsi.o 00:02:45.450 CC lib/iscsi/param.o 00:02:45.450 CC lib/iscsi/portal_grp.o 00:02:45.450 CC lib/vhost/vhost.o 00:02:45.450 CC lib/iscsi/tgt_node.o 00:02:45.450 CC lib/iscsi/iscsi_subsystem.o 00:02:45.450 CC lib/vhost/vhost_rpc.o 00:02:45.450 CC lib/ftl/utils/ftl_mempool.o 00:02:45.450 CC lib/iscsi/iscsi_rpc.o 00:02:45.710 CC lib/ftl/utils/ftl_bitmap.o 00:02:45.710 CC lib/iscsi/task.o 00:02:45.710 CC lib/vhost/vhost_scsi.o 00:02:45.710 CC lib/vhost/vhost_blk.o 00:02:45.710 CC lib/vhost/rte_vhost_user.o 00:02:45.710 CC lib/ftl/utils/ftl_property.o 00:02:45.710 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:45.710 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:45.969 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:45.969 LIB libspdk_nvmf.a 00:02:45.969 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:45.969 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:45.969 SO libspdk_nvmf.so.20.0 00:02:45.969 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:45.969 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:45.969 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:45.969 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:45.969 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:46.228 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:46.228 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:46.228 SYMLINK libspdk_nvmf.so 00:02:46.228 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:46.228 CC lib/ftl/base/ftl_base_dev.o 00:02:46.228 CC lib/ftl/base/ftl_base_bdev.o 00:02:46.228 CC lib/ftl/ftl_trace.o 00:02:46.486 LIB libspdk_ftl.a 00:02:46.486 LIB libspdk_vhost.a 00:02:46.486 SO libspdk_ftl.so.9.0 00:02:46.744 SO libspdk_vhost.so.8.0 00:02:46.744 SYMLINK libspdk_vhost.so 00:02:46.744 LIB libspdk_iscsi.a 
00:02:46.744 SYMLINK libspdk_ftl.so 00:02:46.744 SO libspdk_iscsi.so.8.0 00:02:47.007 SYMLINK libspdk_iscsi.so 00:02:47.265 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.265 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:47.265 CC module/fsdev/aio/fsdev_aio.o 00:02:47.265 CC module/sock/posix/posix.o 00:02:47.265 CC module/accel/error/accel_error.o 00:02:47.265 CC module/keyring/linux/keyring.o 00:02:47.265 CC module/blob/bdev/blob_bdev.o 00:02:47.265 CC module/keyring/file/keyring.o 00:02:47.265 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.265 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.265 LIB libspdk_env_dpdk_rpc.a 00:02:47.265 SO libspdk_env_dpdk_rpc.so.6.0 00:02:47.265 CC module/keyring/file/keyring_rpc.o 00:02:47.265 CC module/keyring/linux/keyring_rpc.o 00:02:47.265 SYMLINK libspdk_env_dpdk_rpc.so 00:02:47.265 CC module/accel/error/accel_error_rpc.o 00:02:47.265 LIB libspdk_scheduler_gscheduler.a 00:02:47.265 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.265 SO libspdk_scheduler_gscheduler.so.4.0 00:02:47.265 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:47.265 LIB libspdk_scheduler_dynamic.a 00:02:47.523 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:47.523 SO libspdk_scheduler_dynamic.so.4.0 00:02:47.523 LIB libspdk_keyring_file.a 00:02:47.523 SYMLINK libspdk_scheduler_gscheduler.so 00:02:47.523 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:47.523 SO libspdk_keyring_file.so.2.0 00:02:47.523 SYMLINK libspdk_scheduler_dynamic.so 00:02:47.523 LIB libspdk_keyring_linux.a 00:02:47.523 LIB libspdk_accel_error.a 00:02:47.523 LIB libspdk_blob_bdev.a 00:02:47.523 SYMLINK libspdk_keyring_file.so 00:02:47.523 SO libspdk_keyring_linux.so.1.0 00:02:47.523 CC module/fsdev/aio/linux_aio_mgr.o 00:02:47.523 SO libspdk_accel_error.so.2.0 00:02:47.523 SO libspdk_blob_bdev.so.11.0 00:02:47.523 SYMLINK libspdk_keyring_linux.so 00:02:47.523 SYMLINK libspdk_accel_error.so 00:02:47.523 SYMLINK libspdk_blob_bdev.so 00:02:47.523 CC module/accel/ioat/accel_ioat.o 00:02:47.523 CC module/accel/ioat/accel_ioat_rpc.o 00:02:47.523 CC module/accel/dsa/accel_dsa.o 00:02:47.523 CC module/accel/dsa/accel_dsa_rpc.o 00:02:47.523 CC module/accel/iaa/accel_iaa.o 00:02:47.780 CC module/accel/iaa/accel_iaa_rpc.o 00:02:47.781 LIB libspdk_fsdev_aio.a 00:02:47.781 LIB libspdk_accel_ioat.a 00:02:47.781 CC module/bdev/delay/vbdev_delay.o 00:02:47.781 CC module/blobfs/bdev/blobfs_bdev.o 00:02:47.781 CC module/bdev/error/vbdev_error.o 00:02:47.781 SO libspdk_accel_ioat.so.6.0 00:02:47.781 SO libspdk_fsdev_aio.so.1.0 00:02:47.781 LIB libspdk_accel_iaa.a 00:02:47.781 CC module/bdev/gpt/gpt.o 00:02:47.781 SO libspdk_accel_iaa.so.3.0 00:02:47.781 LIB libspdk_accel_dsa.a 00:02:47.781 SYMLINK libspdk_accel_ioat.so 00:02:47.781 SO libspdk_accel_dsa.so.5.0 00:02:47.781 SYMLINK libspdk_fsdev_aio.so 00:02:47.781 CC module/bdev/error/vbdev_error_rpc.o 00:02:47.781 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:47.781 SYMLINK libspdk_accel_iaa.so 00:02:48.037 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.037 SYMLINK libspdk_accel_dsa.so 00:02:48.037 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.037 LIB libspdk_sock_posix.a 00:02:48.037 SO libspdk_sock_posix.so.6.0 00:02:48.037 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.037 CC module/bdev/malloc/bdev_malloc.o 00:02:48.037 LIB libspdk_bdev_error.a 00:02:48.037 SYMLINK libspdk_sock_posix.so 00:02:48.037 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.037 CC module/bdev/null/bdev_null.o 00:02:48.037 SO libspdk_bdev_error.so.6.0 00:02:48.037 LIB libspdk_blobfs_bdev.a 
00:02:48.037 SYMLINK libspdk_bdev_error.so 00:02:48.037 SO libspdk_blobfs_bdev.so.6.0 00:02:48.037 CC module/bdev/null/bdev_null_rpc.o 00:02:48.037 LIB libspdk_bdev_delay.a 00:02:48.037 CC module/bdev/nvme/bdev_nvme.o 00:02:48.037 SYMLINK libspdk_blobfs_bdev.so 00:02:48.037 SO libspdk_bdev_delay.so.6.0 00:02:48.037 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.293 SYMLINK libspdk_bdev_delay.so 00:02:48.293 LIB libspdk_bdev_gpt.a 00:02:48.293 LIB libspdk_bdev_null.a 00:02:48.293 SO libspdk_bdev_gpt.so.6.0 00:02:48.293 CC module/bdev/raid/bdev_raid.o 00:02:48.293 CC module/bdev/split/vbdev_split.o 00:02:48.293 SO libspdk_bdev_null.so.6.0 00:02:48.293 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.293 SYMLINK libspdk_bdev_gpt.so 00:02:48.293 CC module/bdev/split/vbdev_split_rpc.o 00:02:48.293 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:48.293 SYMLINK libspdk_bdev_null.so 00:02:48.293 CC module/bdev/raid/bdev_raid_rpc.o 00:02:48.293 CC module/bdev/xnvme/bdev_xnvme.o 00:02:48.293 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.293 LIB libspdk_bdev_malloc.a 00:02:48.293 SO libspdk_bdev_malloc.so.6.0 00:02:48.550 CC module/bdev/raid/bdev_raid_sb.o 00:02:48.550 LIB libspdk_bdev_split.a 00:02:48.550 SYMLINK libspdk_bdev_malloc.so 00:02:48.550 CC module/bdev/raid/raid0.o 00:02:48.550 SO libspdk_bdev_split.so.6.0 00:02:48.550 LIB libspdk_bdev_passthru.a 00:02:48.550 SO libspdk_bdev_passthru.so.6.0 00:02:48.550 SYMLINK libspdk_bdev_split.so 00:02:48.550 SYMLINK libspdk_bdev_passthru.so 00:02:48.550 LIB libspdk_bdev_lvol.a 00:02:48.550 SO libspdk_bdev_lvol.so.6.0 00:02:48.550 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:48.550 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:48.550 CC module/bdev/aio/bdev_aio.o 00:02:48.841 CC module/bdev/ftl/bdev_ftl.o 00:02:48.841 SYMLINK libspdk_bdev_lvol.so 00:02:48.841 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.841 CC module/bdev/iscsi/bdev_iscsi.o 00:02:48.841 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:48.841 LIB libspdk_bdev_zone_block.a 00:02:48.841 SO libspdk_bdev_zone_block.so.6.0 00:02:48.841 LIB libspdk_bdev_xnvme.a 00:02:48.841 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:48.841 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:48.841 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:48.841 SO libspdk_bdev_xnvme.so.3.0 00:02:48.841 SYMLINK libspdk_bdev_zone_block.so 00:02:48.841 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:48.841 SYMLINK libspdk_bdev_xnvme.so 00:02:48.841 CC module/bdev/raid/raid1.o 00:02:48.841 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.120 LIB libspdk_bdev_aio.a 00:02:49.120 SO libspdk_bdev_aio.so.6.0 00:02:49.120 CC module/bdev/raid/concat.o 00:02:49.120 LIB libspdk_bdev_iscsi.a 00:02:49.120 SO libspdk_bdev_iscsi.so.6.0 00:02:49.120 LIB libspdk_bdev_ftl.a 00:02:49.120 SYMLINK libspdk_bdev_aio.so 00:02:49.120 CC module/bdev/nvme/nvme_rpc.o 00:02:49.120 SO libspdk_bdev_ftl.so.6.0 00:02:49.120 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.120 SYMLINK libspdk_bdev_iscsi.so 00:02:49.120 CC module/bdev/nvme/vbdev_opal.o 00:02:49.120 SYMLINK libspdk_bdev_ftl.so 00:02:49.120 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.120 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.120 LIB libspdk_bdev_raid.a 00:02:49.120 LIB libspdk_bdev_virtio.a 00:02:49.120 SO libspdk_bdev_virtio.so.6.0 00:02:49.376 SO libspdk_bdev_raid.so.6.0 00:02:49.376 SYMLINK libspdk_bdev_virtio.so 00:02:49.376 SYMLINK libspdk_bdev_raid.so 00:02:50.309 LIB libspdk_bdev_nvme.a 00:02:50.567 SO libspdk_bdev_nvme.so.7.1 00:02:50.567 SYMLINK libspdk_bdev_nvme.so 
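The CC/LIB/SO/SYMLINK lines above are SPDK's own make output: each library's objects are compiled, archived into a static .a, linked as a versioned shared object, and given an unversioned symlink. A minimal manual equivalent of this stage, assuming debug, AddressSanitizer, and shared-library settings consistent with the DPDK options earlier in the log, would look roughly like the sketch below; the exact flags passed by the autotest scripts are an assumption:

    # Illustrative manual SPDK build (assumed flags; the CI wrapper drives this through its own scripts)
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan --with-shared
    make -j10    # produces the CC/LIB/SO/SYMLINK lines seen in this log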
00:02:50.825 CC module/event/subsystems/sock/sock.o 00:02:50.825 CC module/event/subsystems/keyring/keyring.o 00:02:50.825 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:50.825 CC module/event/subsystems/vmd/vmd.o 00:02:50.825 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:50.825 CC module/event/subsystems/fsdev/fsdev.o 00:02:50.826 CC module/event/subsystems/iobuf/iobuf.o 00:02:50.826 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:50.826 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.083 LIB libspdk_event_vhost_blk.a 00:02:51.083 LIB libspdk_event_keyring.a 00:02:51.083 SO libspdk_event_vhost_blk.so.3.0 00:02:51.083 SO libspdk_event_keyring.so.1.0 00:02:51.083 LIB libspdk_event_scheduler.a 00:02:51.083 LIB libspdk_event_vmd.a 00:02:51.083 LIB libspdk_event_fsdev.a 00:02:51.083 SO libspdk_event_scheduler.so.4.0 00:02:51.083 LIB libspdk_event_iobuf.a 00:02:51.083 SO libspdk_event_vmd.so.6.0 00:02:51.083 SO libspdk_event_fsdev.so.1.0 00:02:51.083 SYMLINK libspdk_event_vhost_blk.so 00:02:51.083 LIB libspdk_event_sock.a 00:02:51.083 SYMLINK libspdk_event_keyring.so 00:02:51.083 SO libspdk_event_iobuf.so.3.0 00:02:51.083 SYMLINK libspdk_event_scheduler.so 00:02:51.083 SO libspdk_event_sock.so.5.0 00:02:51.083 SYMLINK libspdk_event_vmd.so 00:02:51.083 SYMLINK libspdk_event_fsdev.so 00:02:51.083 SYMLINK libspdk_event_iobuf.so 00:02:51.083 SYMLINK libspdk_event_sock.so 00:02:51.341 CC module/event/subsystems/accel/accel.o 00:02:51.600 LIB libspdk_event_accel.a 00:02:51.600 SO libspdk_event_accel.so.6.0 00:02:51.600 SYMLINK libspdk_event_accel.so 00:02:51.858 CC module/event/subsystems/bdev/bdev.o 00:02:51.858 LIB libspdk_event_bdev.a 00:02:51.858 SO libspdk_event_bdev.so.6.0 00:02:51.858 SYMLINK libspdk_event_bdev.so 00:02:52.115 CC module/event/subsystems/scsi/scsi.o 00:02:52.115 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:52.115 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:52.115 CC module/event/subsystems/nbd/nbd.o 00:02:52.115 CC module/event/subsystems/ublk/ublk.o 00:02:52.115 LIB libspdk_event_nbd.a 00:02:52.115 LIB libspdk_event_scsi.a 00:02:52.115 LIB libspdk_event_ublk.a 00:02:52.373 SO libspdk_event_nbd.so.6.0 00:02:52.373 SO libspdk_event_scsi.so.6.0 00:02:52.373 SO libspdk_event_ublk.so.3.0 00:02:52.373 SYMLINK libspdk_event_nbd.so 00:02:52.373 SYMLINK libspdk_event_ublk.so 00:02:52.373 SYMLINK libspdk_event_scsi.so 00:02:52.373 LIB libspdk_event_nvmf.a 00:02:52.373 SO libspdk_event_nvmf.so.6.0 00:02:52.373 SYMLINK libspdk_event_nvmf.so 00:02:52.373 CC module/event/subsystems/iscsi/iscsi.o 00:02:52.373 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:52.630 LIB libspdk_event_vhost_scsi.a 00:02:52.630 LIB libspdk_event_iscsi.a 00:02:52.630 SO libspdk_event_vhost_scsi.so.3.0 00:02:52.630 SO libspdk_event_iscsi.so.6.0 00:02:52.630 SYMLINK libspdk_event_vhost_scsi.so 00:02:52.630 SYMLINK libspdk_event_iscsi.so 00:02:52.888 SO libspdk.so.6.0 00:02:52.888 SYMLINK libspdk.so 00:02:52.888 CXX app/trace/trace.o 00:02:52.888 CC test/rpc_client/rpc_client_test.o 00:02:52.888 TEST_HEADER include/spdk/accel.h 00:02:52.888 TEST_HEADER include/spdk/accel_module.h 00:02:52.888 TEST_HEADER include/spdk/assert.h 00:02:52.888 TEST_HEADER include/spdk/barrier.h 00:02:52.888 TEST_HEADER include/spdk/base64.h 00:02:52.888 TEST_HEADER include/spdk/bdev.h 00:02:52.888 TEST_HEADER include/spdk/bdev_module.h 00:02:52.888 TEST_HEADER include/spdk/bdev_zone.h 00:02:52.888 TEST_HEADER include/spdk/bit_array.h 00:02:52.888 TEST_HEADER include/spdk/bit_pool.h 00:02:52.888 
TEST_HEADER include/spdk/blob_bdev.h 00:02:52.888 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:52.888 TEST_HEADER include/spdk/blobfs.h 00:02:52.888 TEST_HEADER include/spdk/blob.h 00:02:52.888 TEST_HEADER include/spdk/conf.h 00:02:52.888 TEST_HEADER include/spdk/config.h 00:02:52.888 TEST_HEADER include/spdk/cpuset.h 00:02:52.888 TEST_HEADER include/spdk/crc16.h 00:02:52.888 TEST_HEADER include/spdk/crc32.h 00:02:52.888 TEST_HEADER include/spdk/crc64.h 00:02:52.888 TEST_HEADER include/spdk/dif.h 00:02:52.888 TEST_HEADER include/spdk/dma.h 00:02:52.888 TEST_HEADER include/spdk/endian.h 00:02:52.888 TEST_HEADER include/spdk/env_dpdk.h 00:02:52.888 TEST_HEADER include/spdk/env.h 00:02:52.888 TEST_HEADER include/spdk/event.h 00:02:52.888 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:53.145 TEST_HEADER include/spdk/fd_group.h 00:02:53.145 TEST_HEADER include/spdk/fd.h 00:02:53.145 TEST_HEADER include/spdk/file.h 00:02:53.146 TEST_HEADER include/spdk/fsdev.h 00:02:53.146 TEST_HEADER include/spdk/fsdev_module.h 00:02:53.146 TEST_HEADER include/spdk/ftl.h 00:02:53.146 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:53.146 TEST_HEADER include/spdk/gpt_spec.h 00:02:53.146 TEST_HEADER include/spdk/hexlify.h 00:02:53.146 TEST_HEADER include/spdk/histogram_data.h 00:02:53.146 TEST_HEADER include/spdk/idxd.h 00:02:53.146 TEST_HEADER include/spdk/idxd_spec.h 00:02:53.146 TEST_HEADER include/spdk/init.h 00:02:53.146 TEST_HEADER include/spdk/ioat.h 00:02:53.146 CC examples/ioat/perf/perf.o 00:02:53.146 CC test/thread/poller_perf/poller_perf.o 00:02:53.146 TEST_HEADER include/spdk/ioat_spec.h 00:02:53.146 TEST_HEADER include/spdk/iscsi_spec.h 00:02:53.146 CC examples/util/zipf/zipf.o 00:02:53.146 TEST_HEADER include/spdk/json.h 00:02:53.146 TEST_HEADER include/spdk/jsonrpc.h 00:02:53.146 TEST_HEADER include/spdk/keyring.h 00:02:53.146 TEST_HEADER include/spdk/keyring_module.h 00:02:53.146 TEST_HEADER include/spdk/likely.h 00:02:53.146 TEST_HEADER include/spdk/log.h 00:02:53.146 TEST_HEADER include/spdk/lvol.h 00:02:53.146 TEST_HEADER include/spdk/md5.h 00:02:53.146 TEST_HEADER include/spdk/memory.h 00:02:53.146 TEST_HEADER include/spdk/mmio.h 00:02:53.146 TEST_HEADER include/spdk/nbd.h 00:02:53.146 TEST_HEADER include/spdk/net.h 00:02:53.146 TEST_HEADER include/spdk/notify.h 00:02:53.146 TEST_HEADER include/spdk/nvme.h 00:02:53.146 CC test/dma/test_dma/test_dma.o 00:02:53.146 TEST_HEADER include/spdk/nvme_intel.h 00:02:53.146 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:53.146 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:53.146 TEST_HEADER include/spdk/nvme_spec.h 00:02:53.146 TEST_HEADER include/spdk/nvme_zns.h 00:02:53.146 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:53.146 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:53.146 TEST_HEADER include/spdk/nvmf.h 00:02:53.146 TEST_HEADER include/spdk/nvmf_spec.h 00:02:53.146 TEST_HEADER include/spdk/nvmf_transport.h 00:02:53.146 TEST_HEADER include/spdk/opal.h 00:02:53.146 TEST_HEADER include/spdk/opal_spec.h 00:02:53.146 TEST_HEADER include/spdk/pci_ids.h 00:02:53.146 TEST_HEADER include/spdk/pipe.h 00:02:53.146 TEST_HEADER include/spdk/queue.h 00:02:53.146 TEST_HEADER include/spdk/reduce.h 00:02:53.146 CC test/app/bdev_svc/bdev_svc.o 00:02:53.146 TEST_HEADER include/spdk/rpc.h 00:02:53.146 TEST_HEADER include/spdk/scheduler.h 00:02:53.146 TEST_HEADER include/spdk/scsi.h 00:02:53.146 TEST_HEADER include/spdk/scsi_spec.h 00:02:53.146 TEST_HEADER include/spdk/sock.h 00:02:53.146 TEST_HEADER include/spdk/stdinc.h 00:02:53.146 TEST_HEADER include/spdk/string.h 
00:02:53.146 TEST_HEADER include/spdk/thread.h 00:02:53.146 LINK rpc_client_test 00:02:53.146 TEST_HEADER include/spdk/trace.h 00:02:53.146 TEST_HEADER include/spdk/trace_parser.h 00:02:53.146 TEST_HEADER include/spdk/tree.h 00:02:53.146 TEST_HEADER include/spdk/ublk.h 00:02:53.146 CC test/env/mem_callbacks/mem_callbacks.o 00:02:53.146 TEST_HEADER include/spdk/util.h 00:02:53.146 TEST_HEADER include/spdk/uuid.h 00:02:53.146 TEST_HEADER include/spdk/version.h 00:02:53.146 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:53.146 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:53.146 TEST_HEADER include/spdk/vhost.h 00:02:53.146 TEST_HEADER include/spdk/vmd.h 00:02:53.146 TEST_HEADER include/spdk/xor.h 00:02:53.146 TEST_HEADER include/spdk/zipf.h 00:02:53.146 CXX test/cpp_headers/accel.o 00:02:53.146 LINK poller_perf 00:02:53.146 LINK zipf 00:02:53.146 LINK interrupt_tgt 00:02:53.146 LINK ioat_perf 00:02:53.146 CC app/trace_record/trace_record.o 00:02:53.146 LINK bdev_svc 00:02:53.146 CXX test/cpp_headers/accel_module.o 00:02:53.403 LINK spdk_trace 00:02:53.403 CC examples/ioat/verify/verify.o 00:02:53.403 CC test/env/vtophys/vtophys.o 00:02:53.403 CXX test/cpp_headers/assert.o 00:02:53.403 CC test/event/event_perf/event_perf.o 00:02:53.403 CXX test/cpp_headers/barrier.o 00:02:53.403 CC app/nvmf_tgt/nvmf_main.o 00:02:53.403 LINK spdk_trace_record 00:02:53.403 LINK vtophys 00:02:53.660 LINK test_dma 00:02:53.660 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:53.660 LINK verify 00:02:53.660 LINK event_perf 00:02:53.660 LINK nvmf_tgt 00:02:53.660 CXX test/cpp_headers/base64.o 00:02:53.660 LINK mem_callbacks 00:02:53.660 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:53.660 CC test/env/memory/memory_ut.o 00:02:53.660 CC test/event/reactor/reactor.o 00:02:53.660 CXX test/cpp_headers/bdev.o 00:02:53.660 CC test/app/histogram_perf/histogram_perf.o 00:02:53.660 LINK env_dpdk_post_init 00:02:53.917 CC test/env/pci/pci_ut.o 00:02:53.917 LINK reactor 00:02:53.917 CC app/iscsi_tgt/iscsi_tgt.o 00:02:53.917 CC examples/sock/hello_world/hello_sock.o 00:02:53.917 CC examples/thread/thread/thread_ex.o 00:02:53.918 LINK histogram_perf 00:02:53.918 CXX test/cpp_headers/bdev_module.o 00:02:53.918 LINK nvme_fuzz 00:02:53.918 CC test/event/reactor_perf/reactor_perf.o 00:02:53.918 LINK iscsi_tgt 00:02:54.175 CXX test/cpp_headers/bdev_zone.o 00:02:54.175 LINK hello_sock 00:02:54.175 LINK thread 00:02:54.175 LINK reactor_perf 00:02:54.175 CC app/spdk_tgt/spdk_tgt.o 00:02:54.175 CC examples/vmd/lsvmd/lsvmd.o 00:02:54.175 LINK pci_ut 00:02:54.175 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:54.175 CXX test/cpp_headers/bit_array.o 00:02:54.175 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:54.175 LINK lsvmd 00:02:54.432 CC test/event/app_repeat/app_repeat.o 00:02:54.432 LINK spdk_tgt 00:02:54.432 CC test/event/scheduler/scheduler.o 00:02:54.432 CXX test/cpp_headers/bit_pool.o 00:02:54.432 CC test/accel/dif/dif.o 00:02:54.432 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:54.432 LINK app_repeat 00:02:54.432 CC examples/vmd/led/led.o 00:02:54.432 CXX test/cpp_headers/blob_bdev.o 00:02:54.432 CC test/blobfs/mkfs/mkfs.o 00:02:54.432 LINK scheduler 00:02:54.689 CC app/spdk_lspci/spdk_lspci.o 00:02:54.689 LINK led 00:02:54.689 CXX test/cpp_headers/blobfs_bdev.o 00:02:54.690 LINK spdk_lspci 00:02:54.690 LINK memory_ut 00:02:54.690 CC test/lvol/esnap/esnap.o 00:02:54.690 LINK mkfs 00:02:54.690 CXX test/cpp_headers/blobfs.o 00:02:54.690 LINK vhost_fuzz 00:02:54.948 CC test/nvme/aer/aer.o 00:02:54.948 CC 
examples/idxd/perf/perf.o 00:02:54.948 CC app/spdk_nvme_perf/perf.o 00:02:54.948 CXX test/cpp_headers/blob.o 00:02:54.948 CXX test/cpp_headers/conf.o 00:02:54.948 CC app/spdk_nvme_identify/identify.o 00:02:54.948 CXX test/cpp_headers/config.o 00:02:54.948 LINK dif 00:02:54.948 CXX test/cpp_headers/cpuset.o 00:02:55.205 CXX test/cpp_headers/crc16.o 00:02:55.205 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.205 LINK aer 00:02:55.205 CC test/nvme/reset/reset.o 00:02:55.205 LINK idxd_perf 00:02:55.205 CXX test/cpp_headers/crc32.o 00:02:55.205 CC test/bdev/bdevio/bdevio.o 00:02:55.205 LINK spdk_nvme_discover 00:02:55.205 CXX test/cpp_headers/crc64.o 00:02:55.205 CC app/spdk_top/spdk_top.o 00:02:55.463 LINK reset 00:02:55.463 CXX test/cpp_headers/dif.o 00:02:55.463 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:55.463 CC app/vhost/vhost.o 00:02:55.463 CXX test/cpp_headers/dma.o 00:02:55.463 CC test/nvme/sgl/sgl.o 00:02:55.722 LINK bdevio 00:02:55.722 LINK vhost 00:02:55.722 CXX test/cpp_headers/endian.o 00:02:55.722 LINK spdk_nvme_perf 00:02:55.722 LINK iscsi_fuzz 00:02:55.722 LINK spdk_nvme_identify 00:02:55.722 LINK hello_fsdev 00:02:55.722 LINK sgl 00:02:55.722 CXX test/cpp_headers/env_dpdk.o 00:02:55.979 CC app/spdk_dd/spdk_dd.o 00:02:55.979 CXX test/cpp_headers/env.o 00:02:55.979 CC test/app/jsoncat/jsoncat.o 00:02:55.979 CC app/fio/nvme/fio_plugin.o 00:02:55.979 CC test/nvme/e2edp/nvme_dp.o 00:02:55.979 CXX test/cpp_headers/event.o 00:02:55.979 CC examples/accel/perf/accel_perf.o 00:02:55.979 CC app/fio/bdev/fio_plugin.o 00:02:55.979 LINK jsoncat 00:02:55.979 CC test/nvme/overhead/overhead.o 00:02:56.270 LINK spdk_dd 00:02:56.270 CXX test/cpp_headers/fd_group.o 00:02:56.270 LINK nvme_dp 00:02:56.270 CC test/app/stub/stub.o 00:02:56.270 CXX test/cpp_headers/fd.o 00:02:56.270 LINK spdk_top 00:02:56.270 LINK overhead 00:02:56.270 CXX test/cpp_headers/file.o 00:02:56.270 CC test/nvme/err_injection/err_injection.o 00:02:56.530 LINK stub 00:02:56.530 LINK accel_perf 00:02:56.530 CC examples/blob/hello_world/hello_blob.o 00:02:56.530 CXX test/cpp_headers/fsdev.o 00:02:56.530 LINK spdk_nvme 00:02:56.530 CC examples/blob/cli/blobcli.o 00:02:56.530 LINK err_injection 00:02:56.530 CC test/nvme/startup/startup.o 00:02:56.530 LINK spdk_bdev 00:02:56.530 CC test/nvme/reserve/reserve.o 00:02:56.530 CXX test/cpp_headers/fsdev_module.o 00:02:56.530 CC examples/nvme/hello_world/hello_world.o 00:02:56.530 LINK hello_blob 00:02:56.530 CC test/nvme/simple_copy/simple_copy.o 00:02:56.787 CC examples/nvme/reconnect/reconnect.o 00:02:56.787 LINK startup 00:02:56.787 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.787 LINK reserve 00:02:56.787 CXX test/cpp_headers/ftl.o 00:02:56.787 CXX test/cpp_headers/fuse_dispatcher.o 00:02:56.787 CC examples/nvme/arbitration/arbitration.o 00:02:56.787 CXX test/cpp_headers/gpt_spec.o 00:02:56.787 LINK hello_world 00:02:56.787 LINK simple_copy 00:02:57.044 CXX test/cpp_headers/hexlify.o 00:02:57.044 CC examples/nvme/hotplug/hotplug.o 00:02:57.044 CXX test/cpp_headers/histogram_data.o 00:02:57.044 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.044 LINK blobcli 00:02:57.044 LINK reconnect 00:02:57.044 CC test/nvme/connect_stress/connect_stress.o 00:02:57.044 CXX test/cpp_headers/idxd.o 00:02:57.044 CXX test/cpp_headers/idxd_spec.o 00:02:57.044 LINK arbitration 00:02:57.044 LINK cmb_copy 00:02:57.303 LINK nvme_manage 00:02:57.303 LINK connect_stress 00:02:57.303 CC examples/bdev/hello_world/hello_bdev.o 00:02:57.303 LINK hotplug 00:02:57.303 CXX test/cpp_headers/init.o 
00:02:57.303 CC examples/bdev/bdevperf/bdevperf.o 00:02:57.303 CXX test/cpp_headers/ioat.o 00:02:57.303 CC examples/nvme/abort/abort.o 00:02:57.303 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.303 CXX test/cpp_headers/ioat_spec.o 00:02:57.303 CC test/nvme/boot_partition/boot_partition.o 00:02:57.303 LINK hello_bdev 00:02:57.303 CXX test/cpp_headers/iscsi_spec.o 00:02:57.303 CXX test/cpp_headers/json.o 00:02:57.303 CC test/nvme/compliance/nvme_compliance.o 00:02:57.303 LINK pmr_persistence 00:02:57.560 CXX test/cpp_headers/jsonrpc.o 00:02:57.560 LINK boot_partition 00:02:57.560 CXX test/cpp_headers/keyring.o 00:02:57.560 CXX test/cpp_headers/keyring_module.o 00:02:57.560 CXX test/cpp_headers/likely.o 00:02:57.560 CXX test/cpp_headers/log.o 00:02:57.560 CXX test/cpp_headers/lvol.o 00:02:57.560 CXX test/cpp_headers/md5.o 00:02:57.560 LINK abort 00:02:57.560 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.560 CXX test/cpp_headers/memory.o 00:02:57.560 LINK nvme_compliance 00:02:57.560 CXX test/cpp_headers/mmio.o 00:02:57.560 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.818 CXX test/cpp_headers/nbd.o 00:02:57.818 CXX test/cpp_headers/net.o 00:02:57.818 CXX test/cpp_headers/notify.o 00:02:57.818 LINK fused_ordering 00:02:57.818 CXX test/cpp_headers/nvme.o 00:02:57.818 CXX test/cpp_headers/nvme_intel.o 00:02:57.818 CXX test/cpp_headers/nvme_ocssd.o 00:02:57.818 LINK doorbell_aers 00:02:57.818 CC test/nvme/fdp/fdp.o 00:02:57.818 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:57.818 CXX test/cpp_headers/nvme_spec.o 00:02:57.818 LINK bdevperf 00:02:57.818 CXX test/cpp_headers/nvme_zns.o 00:02:57.818 CXX test/cpp_headers/nvmf_cmd.o 00:02:57.818 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:57.818 CC test/nvme/cuse/cuse.o 00:02:57.818 CXX test/cpp_headers/nvmf.o 00:02:58.077 CXX test/cpp_headers/nvmf_spec.o 00:02:58.077 CXX test/cpp_headers/nvmf_transport.o 00:02:58.077 CXX test/cpp_headers/opal.o 00:02:58.077 CXX test/cpp_headers/opal_spec.o 00:02:58.077 CXX test/cpp_headers/pci_ids.o 00:02:58.077 CXX test/cpp_headers/pipe.o 00:02:58.077 CXX test/cpp_headers/queue.o 00:02:58.077 CXX test/cpp_headers/reduce.o 00:02:58.077 LINK fdp 00:02:58.077 CXX test/cpp_headers/rpc.o 00:02:58.077 CXX test/cpp_headers/scheduler.o 00:02:58.077 CXX test/cpp_headers/scsi.o 00:02:58.077 CC examples/nvmf/nvmf/nvmf.o 00:02:58.333 CXX test/cpp_headers/scsi_spec.o 00:02:58.333 CXX test/cpp_headers/sock.o 00:02:58.333 CXX test/cpp_headers/stdinc.o 00:02:58.333 CXX test/cpp_headers/string.o 00:02:58.333 CXX test/cpp_headers/thread.o 00:02:58.334 CXX test/cpp_headers/trace.o 00:02:58.334 CXX test/cpp_headers/trace_parser.o 00:02:58.334 CXX test/cpp_headers/tree.o 00:02:58.334 CXX test/cpp_headers/ublk.o 00:02:58.334 CXX test/cpp_headers/util.o 00:02:58.334 CXX test/cpp_headers/uuid.o 00:02:58.334 CXX test/cpp_headers/version.o 00:02:58.334 CXX test/cpp_headers/vfio_user_pci.o 00:02:58.334 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.334 LINK nvmf 00:02:58.334 CXX test/cpp_headers/vhost.o 00:02:58.334 CXX test/cpp_headers/vmd.o 00:02:58.590 CXX test/cpp_headers/xor.o 00:02:58.590 CXX test/cpp_headers/zipf.o 00:02:58.848 LINK esnap 00:02:58.848 LINK cuse 00:02:59.106 00:02:59.106 real 1m2.034s 00:02:59.106 user 5m56.573s 00:02:59.106 sys 1m1.750s 00:02:59.106 17:06:42 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:59.106 ************************************ 00:02:59.106 17:06:42 make -- common/autotest_common.sh@10 -- $ set +x 00:02:59.106 END TEST make 00:02:59.106 
************************************ 00:02:59.107 17:06:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:59.107 17:06:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:59.107 17:06:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:59.107 17:06:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.107 17:06:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:59.107 17:06:42 -- pm/common@44 -- $ pid=5073 00:02:59.107 17:06:42 -- pm/common@50 -- $ kill -TERM 5073 00:02:59.107 17:06:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.107 17:06:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:59.107 17:06:42 -- pm/common@44 -- $ pid=5074 00:02:59.107 17:06:42 -- pm/common@50 -- $ kill -TERM 5074 00:02:59.107 17:06:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:59.107 17:06:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:59.365 17:06:42 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:59.365 17:06:42 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:59.365 17:06:42 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:59.365 17:06:42 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:59.365 17:06:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:59.365 17:06:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:59.365 17:06:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:59.365 17:06:42 -- scripts/common.sh@336 -- # IFS=.-: 00:02:59.365 17:06:42 -- scripts/common.sh@336 -- # read -ra ver1 00:02:59.365 17:06:42 -- scripts/common.sh@337 -- # IFS=.-: 00:02:59.365 17:06:42 -- scripts/common.sh@337 -- # read -ra ver2 00:02:59.365 17:06:42 -- scripts/common.sh@338 -- # local 'op=<' 00:02:59.365 17:06:42 -- scripts/common.sh@340 -- # ver1_l=2 00:02:59.365 17:06:42 -- scripts/common.sh@341 -- # ver2_l=1 00:02:59.365 17:06:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:59.365 17:06:42 -- scripts/common.sh@344 -- # case "$op" in 00:02:59.365 17:06:42 -- scripts/common.sh@345 -- # : 1 00:02:59.365 17:06:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:59.365 17:06:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.365 17:06:42 -- scripts/common.sh@365 -- # decimal 1 00:02:59.365 17:06:42 -- scripts/common.sh@353 -- # local d=1 00:02:59.365 17:06:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:59.365 17:06:42 -- scripts/common.sh@355 -- # echo 1 00:02:59.365 17:06:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:59.365 17:06:42 -- scripts/common.sh@366 -- # decimal 2 00:02:59.365 17:06:42 -- scripts/common.sh@353 -- # local d=2 00:02:59.365 17:06:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:59.365 17:06:42 -- scripts/common.sh@355 -- # echo 2 00:02:59.365 17:06:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:59.365 17:06:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:59.365 17:06:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:59.365 17:06:42 -- scripts/common.sh@368 -- # return 0 00:02:59.365 17:06:42 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:59.365 17:06:42 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:59.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.365 --rc genhtml_branch_coverage=1 00:02:59.365 --rc genhtml_function_coverage=1 00:02:59.365 --rc genhtml_legend=1 00:02:59.365 --rc geninfo_all_blocks=1 00:02:59.365 --rc geninfo_unexecuted_blocks=1 00:02:59.365 00:02:59.365 ' 00:02:59.365 17:06:42 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:59.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.365 --rc genhtml_branch_coverage=1 00:02:59.365 --rc genhtml_function_coverage=1 00:02:59.365 --rc genhtml_legend=1 00:02:59.365 --rc geninfo_all_blocks=1 00:02:59.365 --rc geninfo_unexecuted_blocks=1 00:02:59.365 00:02:59.365 ' 00:02:59.365 17:06:42 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:59.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.365 --rc genhtml_branch_coverage=1 00:02:59.365 --rc genhtml_function_coverage=1 00:02:59.365 --rc genhtml_legend=1 00:02:59.365 --rc geninfo_all_blocks=1 00:02:59.365 --rc geninfo_unexecuted_blocks=1 00:02:59.365 00:02:59.365 ' 00:02:59.365 17:06:42 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:59.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.365 --rc genhtml_branch_coverage=1 00:02:59.365 --rc genhtml_function_coverage=1 00:02:59.365 --rc genhtml_legend=1 00:02:59.365 --rc geninfo_all_blocks=1 00:02:59.365 --rc geninfo_unexecuted_blocks=1 00:02:59.365 00:02:59.365 ' 00:02:59.365 17:06:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:59.365 17:06:42 -- nvmf/common.sh@7 -- # uname -s 00:02:59.365 17:06:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:59.365 17:06:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:59.365 17:06:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:59.365 17:06:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:59.365 17:06:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:59.365 17:06:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:59.365 17:06:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:59.365 17:06:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:59.365 17:06:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:59.365 17:06:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:59.365 17:06:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b3233cb-2bfc-4fea-8c96-11b7d418394c 00:02:59.365 
17:06:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=9b3233cb-2bfc-4fea-8c96-11b7d418394c 00:02:59.365 17:06:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:59.365 17:06:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:59.365 17:06:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:59.365 17:06:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:59.365 17:06:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:59.365 17:06:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:59.365 17:06:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:59.365 17:06:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.365 17:06:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.365 17:06:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.365 17:06:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.365 17:06:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.365 17:06:42 -- paths/export.sh@5 -- # export PATH 00:02:59.365 17:06:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.365 17:06:42 -- nvmf/common.sh@51 -- # : 0 00:02:59.365 17:06:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:59.365 17:06:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:59.365 17:06:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:59.365 17:06:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:59.365 17:06:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:59.366 17:06:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:59.366 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:59.366 17:06:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:59.366 17:06:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:59.366 17:06:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:59.366 17:06:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:59.366 17:06:42 -- spdk/autotest.sh@32 -- # uname -s 00:02:59.366 17:06:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:59.366 17:06:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:59.366 17:06:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:59.366 17:06:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:59.366 17:06:42 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:59.366 17:06:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:59.366 17:06:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:59.366 17:06:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:59.366 17:06:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54194 00:02:59.366 17:06:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:59.366 17:06:42 -- pm/common@17 -- # local monitor 00:02:59.366 17:06:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:59.366 17:06:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.366 17:06:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.366 17:06:42 -- pm/common@25 -- # sleep 1 00:02:59.366 17:06:42 -- pm/common@21 -- # date +%s 00:02:59.366 17:06:42 -- pm/common@21 -- # date +%s 00:02:59.366 17:06:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730308002 00:02:59.366 17:06:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730308002 00:02:59.366 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730308002_collect-cpu-load.pm.log 00:02:59.366 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730308002_collect-vmstat.pm.log 00:03:00.301 17:06:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:00.301 17:06:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:00.301 17:06:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:00.301 17:06:43 -- common/autotest_common.sh@10 -- # set +x 00:03:00.301 17:06:43 -- spdk/autotest.sh@59 -- # create_test_list 00:03:00.301 17:06:43 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:00.301 17:06:43 -- common/autotest_common.sh@10 -- # set +x 00:03:00.558 17:06:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:00.558 17:06:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:00.558 17:06:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:00.558 17:06:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:00.558 17:06:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:00.558 17:06:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:00.558 17:06:43 -- common/autotest_common.sh@1455 -- # uname 00:03:00.558 17:06:43 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:00.558 17:06:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:00.558 17:06:43 -- common/autotest_common.sh@1475 -- # uname 00:03:00.558 17:06:43 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:00.558 17:06:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:00.558 17:06:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:00.558 lcov: LCOV version 1.15 00:03:00.558 17:06:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:12.763 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:12.763 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:27.653 17:07:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:27.653 17:07:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:27.653 17:07:09 -- common/autotest_common.sh@10 -- # set +x 00:03:27.653 17:07:09 -- spdk/autotest.sh@78 -- # rm -f 00:03:27.653 17:07:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:27.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:27.653 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:27.912 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:27.912 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:27.912 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:27.912 17:07:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:27.912 17:07:10 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:27.912 17:07:10 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:27.912 17:07:10 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:27.912 17:07:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.912 17:07:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.912 17:07:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1c1n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1648 -- # local device=nvme1c1n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.912 17:07:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.912 17:07:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.912 17:07:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:27.912 17:07:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:27.912 
17:07:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.912 17:07:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:03:27.912 17:07:10 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:03:27.912 17:07:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:27.912 17:07:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:03:27.912 17:07:10 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:03:27.912 17:07:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:03:27.912 17:07:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:27.912 17:07:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:27.912 17:07:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.912 17:07:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:27.912 17:07:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:27.912 17:07:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:27.912 17:07:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:27.912 No valid GPT data, bailing 00:03:27.912 17:07:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.912 17:07:10 -- scripts/common.sh@394 -- # pt= 00:03:27.912 17:07:10 -- scripts/common.sh@395 -- # return 1 00:03:27.912 17:07:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:27.912 1+0 records in 00:03:27.912 1+0 records out 00:03:27.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00962437 s, 109 MB/s 00:03:27.912 17:07:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.912 17:07:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:27.912 17:07:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:27.912 17:07:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:27.912 17:07:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:27.912 No valid GPT data, bailing 00:03:27.912 17:07:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:27.912 17:07:10 -- scripts/common.sh@394 -- # pt= 00:03:27.912 17:07:10 -- scripts/common.sh@395 -- # return 1 00:03:27.912 17:07:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:27.912 1+0 records in 00:03:27.912 1+0 records out 00:03:27.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00280141 s, 374 MB/s 00:03:27.912 17:07:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.912 17:07:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:27.912 17:07:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:27.912 17:07:10 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:27.912 17:07:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:27.912 No valid GPT data, bailing 00:03:27.912 17:07:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:27.912 17:07:10 -- scripts/common.sh@394 -- # pt= 00:03:27.912 17:07:10 -- scripts/common.sh@395 -- # return 1 00:03:27.912 17:07:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:27.912 1+0 
records in 00:03:27.912 1+0 records out 00:03:27.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046044 s, 228 MB/s 00:03:27.912 17:07:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.912 17:07:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:27.912 17:07:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:27.912 17:07:10 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:28.173 17:07:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:28.173 No valid GPT data, bailing 00:03:28.173 17:07:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:28.173 17:07:10 -- scripts/common.sh@394 -- # pt= 00:03:28.173 17:07:10 -- scripts/common.sh@395 -- # return 1 00:03:28.173 17:07:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:28.173 1+0 records in 00:03:28.173 1+0 records out 00:03:28.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387838 s, 270 MB/s 00:03:28.173 17:07:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.173 17:07:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.173 17:07:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:03:28.173 17:07:10 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:03:28.173 17:07:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:03:28.173 No valid GPT data, bailing 00:03:28.173 17:07:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:03:28.173 17:07:10 -- scripts/common.sh@394 -- # pt= 00:03:28.173 17:07:10 -- scripts/common.sh@395 -- # return 1 00:03:28.173 17:07:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:03:28.173 1+0 records in 00:03:28.173 1+0 records out 00:03:28.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413919 s, 253 MB/s 00:03:28.173 17:07:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.173 17:07:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.173 17:07:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:03:28.173 17:07:11 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:03:28.173 17:07:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:03:28.173 No valid GPT data, bailing 00:03:28.173 17:07:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:03:28.173 17:07:11 -- scripts/common.sh@394 -- # pt= 00:03:28.173 17:07:11 -- scripts/common.sh@395 -- # return 1 00:03:28.173 17:07:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:03:28.173 1+0 records in 00:03:28.173 1+0 records out 00:03:28.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00326132 s, 322 MB/s 00:03:28.173 17:07:11 -- spdk/autotest.sh@105 -- # sync 00:03:28.742 17:07:11 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:28.742 17:07:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:28.742 17:07:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:30.639 17:07:13 -- spdk/autotest.sh@111 -- # uname -s 00:03:30.639 17:07:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:30.639 17:07:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:30.639 17:07:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:30.639 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:31.204 
Hugepages 00:03:31.204 node hugesize free / total 00:03:31.204 node0 1048576kB 0 / 0 00:03:31.204 node0 2048kB 0 / 0 00:03:31.204 00:03:31.204 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.204 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:31.204 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:31.204 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:31.204 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:03:31.461 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:31.461 17:07:14 -- spdk/autotest.sh@117 -- # uname -s 00:03:31.461 17:07:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:31.461 17:07:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:31.461 17:07:14 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:31.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:32.284 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.284 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.284 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.284 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.284 17:07:15 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:33.656 17:07:16 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:33.656 17:07:16 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:33.656 17:07:16 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:33.656 17:07:16 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:33.656 17:07:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:33.656 17:07:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:33.656 17:07:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:33.656 17:07:16 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:33.656 17:07:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:33.656 17:07:16 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:33.656 17:07:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:33.656 17:07:16 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:33.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.915 Waiting for block devices as requested 00:03:33.915 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.915 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.915 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.915 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:39.225 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:39.225 17:07:21 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:39.225 17:07:21 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:39.225 17:07:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:03:39.225 17:07:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:39.225 17:07:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:39.225 17:07:21 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:39.225 17:07:21 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:39.225 17:07:21 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:39.225 17:07:21 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:03:39.225 17:07:21 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:03:39.225 17:07:21 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:03:39.225 17:07:21 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:39.225 17:07:21 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:39.225 17:07:21 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:39.225 17:07:21 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:39.225 17:07:21 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:39.225 17:07:21 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:39.225 17:07:21 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:39.225 17:07:21 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:03:39.225 17:07:21 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:39.225 17:07:21 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:39.225 17:07:21 -- common/autotest_common.sh@1541 -- # continue 00:03:39.225 17:07:21 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:39.225 17:07:21 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:39.225 17:07:21 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:03:39.225 17:07:21 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:39.225 17:07:21 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:39.225 17:07:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:39.225 17:07:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:39.225 17:07:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:39.225 17:07:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:39.225 17:07:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:39.225 17:07:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:39.225 17:07:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:39.225 17:07:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:39.225 17:07:22 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:39.225 17:07:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:39.225 17:07:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:39.225 17:07:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:39.226 17:07:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:39.226 17:07:22 -- common/autotest_common.sh@1541 -- # continue 00:03:39.226 17:07:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:39.226 17:07:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:39.226 17:07:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:39.226 17:07:22 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:03:39.226 17:07:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:39.226 17:07:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:03:39.226 17:07:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:39.226 17:07:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:39.226 17:07:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:39.226 17:07:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:39.226 17:07:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:39.226 17:07:22 -- common/autotest_common.sh@1541 -- # continue 00:03:39.226 17:07:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:39.226 17:07:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:39.226 17:07:22 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:03:39.226 17:07:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:39.226 17:07:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:39.226 17:07:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:39.226 17:07:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:39.226 17:07:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:03:39.226 17:07:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:03:39.226 17:07:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:03:39.226 17:07:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:03:39.226 17:07:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:39.226 17:07:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:39.226 17:07:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:39.226 17:07:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:39.226 17:07:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:39.226 17:07:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:03:39.226 17:07:22 -- common/autotest_common.sh@1541 -- # continue 00:03:39.226 17:07:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:39.226 17:07:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:39.226 17:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:39.226 17:07:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:39.226 17:07:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:39.226 17:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:39.226 17:07:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:39.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.362 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.362 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.362 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.362 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.362 17:07:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:40.362 17:07:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:40.362 17:07:23 -- common/autotest_common.sh@10 -- # set +x 00:03:40.362 17:07:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:40.362 17:07:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:40.362 17:07:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:40.362 17:07:23 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:40.362 17:07:23 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:40.362 17:07:23 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:40.362 17:07:23 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:40.362 17:07:23 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:40.362 17:07:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:40.362 17:07:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:40.362 17:07:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.362 17:07:23 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:40.362 17:07:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:40.362 17:07:23 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:40.362 17:07:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:40.362 17:07:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:40.362 17:07:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:40.362 17:07:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:40.362 17:07:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:40.362 17:07:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:40.621 17:07:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:40.621 17:07:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:40.621 17:07:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:40.621 17:07:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:40.621 17:07:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:40.621 17:07:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:40.621 17:07:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
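The "Waiting for block devices" phase traced above runs the same probe for every controller: resolve the PCI address to its /dev/nvmeX node through sysfs, then read oacs and unvmcap from "nvme id-ctrl" to confirm the controller supports namespace management (oacs bit 3) and reports no unallocated capacity. A minimal standalone sketch of that pattern, assuming nvme-cli is installed; the function name and return handling here are illustrative, not the exact autotest_common.sh helpers:

#!/usr/bin/env bash
# Illustrative sketch of the per-controller probe seen in the trace above.
probe_ctrlr() {
  local bdf=$1 sysfs ctrlr oacs unvmcap
  # Map the PCI address (e.g. 0000:00:10.0) to its controller node, as the
  # readlink/grep pair in the trace does.
  sysfs=$(readlink -f /sys/class/nvme/nvme* | grep -m1 "$bdf/nvme/nvme") || return 1
  ctrlr=/dev/$(basename "$sysfs")
  # Optional Admin Command Support (oacs): bit 3 (0x8) = namespace management.
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
  (( oacs & 0x8 )) || return 1
  # Unallocated NVM capacity should read 0 before the namespaces are reused.
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
  (( unvmcap == 0 ))
}

probe_ctrlr 0000:00:10.0 && echo "controller ready for namespace revert"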
00:03:40.621 17:07:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:40.621 17:07:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:40.621 17:07:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:40.621 17:07:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:40.621 17:07:23 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:40.621 17:07:23 -- common/autotest_common.sh@1570 -- # return 0 00:03:40.621 17:07:23 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:40.621 17:07:23 -- common/autotest_common.sh@1578 -- # return 0 00:03:40.621 17:07:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:40.621 17:07:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:40.621 17:07:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.621 17:07:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.621 17:07:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:40.621 17:07:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.621 17:07:23 -- common/autotest_common.sh@10 -- # set +x 00:03:40.621 17:07:23 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:40.621 17:07:23 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:40.621 17:07:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:40.621 17:07:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:40.621 17:07:23 -- common/autotest_common.sh@10 -- # set +x 00:03:40.621 ************************************ 00:03:40.621 START TEST env 00:03:40.621 ************************************ 00:03:40.621 17:07:23 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:40.621 * Looking for test storage... 00:03:40.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:40.621 17:07:23 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:40.621 17:07:23 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:40.621 17:07:23 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:40.621 17:07:23 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:40.621 17:07:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.621 17:07:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.621 17:07:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.621 17:07:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.621 17:07:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.621 17:07:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.621 17:07:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.621 17:07:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.621 17:07:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.621 17:07:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.621 17:07:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.621 17:07:23 env -- scripts/common.sh@344 -- # case "$op" in 00:03:40.621 17:07:23 env -- scripts/common.sh@345 -- # : 1 00:03:40.621 17:07:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.621 17:07:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.621 17:07:23 env -- scripts/common.sh@365 -- # decimal 1 00:03:40.621 17:07:23 env -- scripts/common.sh@353 -- # local d=1 00:03:40.621 17:07:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.621 17:07:23 env -- scripts/common.sh@355 -- # echo 1 00:03:40.621 17:07:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.621 17:07:23 env -- scripts/common.sh@366 -- # decimal 2 00:03:40.621 17:07:23 env -- scripts/common.sh@353 -- # local d=2 00:03:40.621 17:07:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.621 17:07:23 env -- scripts/common.sh@355 -- # echo 2 00:03:40.621 17:07:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.621 17:07:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.621 17:07:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.621 17:07:23 env -- scripts/common.sh@368 -- # return 0 00:03:40.621 17:07:23 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.621 17:07:23 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:40.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.621 --rc genhtml_branch_coverage=1 00:03:40.621 --rc genhtml_function_coverage=1 00:03:40.621 --rc genhtml_legend=1 00:03:40.621 --rc geninfo_all_blocks=1 00:03:40.621 --rc geninfo_unexecuted_blocks=1 00:03:40.621 00:03:40.621 ' 00:03:40.621 17:07:23 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:40.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.621 --rc genhtml_branch_coverage=1 00:03:40.621 --rc genhtml_function_coverage=1 00:03:40.621 --rc genhtml_legend=1 00:03:40.621 --rc geninfo_all_blocks=1 00:03:40.622 --rc geninfo_unexecuted_blocks=1 00:03:40.622 00:03:40.622 ' 00:03:40.622 17:07:23 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:40.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.622 --rc genhtml_branch_coverage=1 00:03:40.622 --rc genhtml_function_coverage=1 00:03:40.622 --rc genhtml_legend=1 00:03:40.622 --rc geninfo_all_blocks=1 00:03:40.622 --rc geninfo_unexecuted_blocks=1 00:03:40.622 00:03:40.622 ' 00:03:40.622 17:07:23 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:40.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.622 --rc genhtml_branch_coverage=1 00:03:40.622 --rc genhtml_function_coverage=1 00:03:40.622 --rc genhtml_legend=1 00:03:40.622 --rc geninfo_all_blocks=1 00:03:40.622 --rc geninfo_unexecuted_blocks=1 00:03:40.622 00:03:40.622 ' 00:03:40.622 17:07:23 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:40.622 17:07:23 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:40.622 17:07:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:40.622 17:07:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.622 ************************************ 00:03:40.622 START TEST env_memory 00:03:40.622 ************************************ 00:03:40.622 17:07:23 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:40.622 00:03:40.622 00:03:40.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.622 http://cunit.sourceforge.net/ 00:03:40.622 00:03:40.622 00:03:40.622 Suite: memory 00:03:40.880 Test: alloc and free memory map ...[2024-10-30 17:07:23.604684] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:40.880 passed 00:03:40.880 Test: mem map translation ...[2024-10-30 17:07:23.678414] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:40.880 [2024-10-30 17:07:23.678530] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:40.880 [2024-10-30 17:07:23.678618] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:40.880 [2024-10-30 17:07:23.678643] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:40.880 passed 00:03:40.880 Test: mem map registration ...[2024-10-30 17:07:23.747533] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:40.880 [2024-10-30 17:07:23.747595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:40.880 passed 00:03:40.880 Test: mem map adjacent registrations ...passed 00:03:40.880 00:03:40.880 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.880 suites 1 1 n/a 0 0 00:03:40.880 tests 4 4 4 0 0 00:03:40.880 asserts 152 152 152 0 n/a 00:03:40.880 00:03:40.880 Elapsed time = 0.290 seconds 00:03:40.880 00:03:40.880 real 0m0.324s 00:03:40.880 user 0m0.294s 00:03:40.880 sys 0m0.023s 00:03:40.880 17:07:23 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:40.880 17:07:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:40.880 ************************************ 00:03:40.880 END TEST env_memory 00:03:40.880 ************************************ 00:03:41.139 17:07:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:41.139 17:07:23 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:41.139 17:07:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.139 17:07:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.139 ************************************ 00:03:41.139 START TEST env_vtophys 00:03:41.139 ************************************ 00:03:41.139 17:07:23 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:41.139 EAL: lib.eal log level changed from notice to debug 00:03:41.139 EAL: Detected lcore 0 as core 0 on socket 0 00:03:41.139 EAL: Detected lcore 1 as core 0 on socket 0 00:03:41.139 EAL: Detected lcore 2 as core 0 on socket 0 00:03:41.139 EAL: Detected lcore 3 as core 0 on socket 0 00:03:41.139 EAL: Detected lcore 4 as core 0 on socket 0 00:03:41.140 EAL: Detected lcore 5 as core 0 on socket 0 00:03:41.140 EAL: Detected lcore 6 as core 0 on socket 0 00:03:41.140 EAL: Detected lcore 7 as core 0 on socket 0 00:03:41.140 EAL: Detected lcore 8 as core 0 on socket 0 00:03:41.140 EAL: Detected lcore 9 as core 0 on socket 0 00:03:41.140 EAL: Maximum logical cores by configuration: 128 00:03:41.140 EAL: Detected CPU lcores: 10 00:03:41.140 EAL: Detected NUMA nodes: 1 00:03:41.140 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:41.140 EAL: Detected shared linkage of DPDK 00:03:41.140 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:41.140 EAL: Selected IOVA mode 'PA' 00:03:41.140 EAL: Probing VFIO support... 00:03:41.140 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:41.140 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:41.140 EAL: Ask a virtual area of 0x2e000 bytes 00:03:41.140 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:41.140 EAL: Setting up physically contiguous memory... 00:03:41.140 EAL: Setting maximum number of open files to 524288 00:03:41.140 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:41.140 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:41.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.140 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:41.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.140 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:41.140 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:41.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.140 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:41.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.140 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:41.140 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:41.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.140 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:41.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.140 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:41.140 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:41.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.140 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:41.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.140 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:41.140 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:41.140 EAL: Hugepages will be freed exactly as allocated. 00:03:41.140 EAL: No shared files mode enabled, IPC is disabled 00:03:41.140 EAL: No shared files mode enabled, IPC is disabled 00:03:41.140 EAL: TSC frequency is ~2600000 KHz 00:03:41.140 EAL: Main lcore 0 is ready (tid=7f132881ba40;cpuset=[0]) 00:03:41.140 EAL: Trying to obtain current memory policy. 00:03:41.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.140 EAL: Restoring previous memory policy: 0 00:03:41.140 EAL: request: mp_malloc_sync 00:03:41.140 EAL: No shared files mode enabled, IPC is disabled 00:03:41.140 EAL: Heap on socket 0 was expanded by 2MB 00:03:41.140 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:41.140 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:41.140 EAL: Mem event callback 'spdk:(nil)' registered 00:03:41.140 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:41.140 00:03:41.140 00:03:41.140 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.140 http://cunit.sourceforge.net/ 00:03:41.140 00:03:41.140 00:03:41.140 Suite: components_suite 00:03:41.712 Test: vtophys_malloc_test ...passed 00:03:41.712 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:41.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.712 EAL: Restoring previous memory policy: 4 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was expanded by 4MB 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was shrunk by 4MB 00:03:41.712 EAL: Trying to obtain current memory policy. 00:03:41.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.712 EAL: Restoring previous memory policy: 4 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was expanded by 6MB 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was shrunk by 6MB 00:03:41.712 EAL: Trying to obtain current memory policy. 00:03:41.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.712 EAL: Restoring previous memory policy: 4 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was expanded by 10MB 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was shrunk by 10MB 00:03:41.712 EAL: Trying to obtain current memory policy. 00:03:41.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.712 EAL: Restoring previous memory policy: 4 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was expanded by 18MB 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was shrunk by 18MB 00:03:41.712 EAL: Trying to obtain current memory policy. 00:03:41.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.712 EAL: Restoring previous memory policy: 4 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was expanded by 34MB 00:03:41.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.712 EAL: request: mp_malloc_sync 00:03:41.712 EAL: No shared files mode enabled, IPC is disabled 00:03:41.712 EAL: Heap on socket 0 was shrunk by 34MB 00:03:41.712 EAL: Trying to obtain current memory policy. 
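The vtophys_malloc_test run here allocates progressively larger buffers from the hugepage-backed heap, which is why EAL reports the heap on socket 0 being expanded and then shrunk by matching amounts. As a hedged illustration of the public env API this suite exercises (a sketch only, not the test's actual source; the app name and sizes are made up):

```c
/* Illustrative sketch only; assumes SPDK's public env API (spdk/env.h).
 * Allocate a DMA-safe buffer from the hugepage-backed heap and translate
 * its virtual address to a physical address, roughly what
 * vtophys_malloc_test does at increasing sizes. */
#include "spdk/stdinc.h"
#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;
	void *buf;
	uint64_t size = 4 * 1024 * 1024;
	uint64_t paddr;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch";          /* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	buf = spdk_dma_malloc(size, 0x200000, NULL);   /* 2 MB alignment */
	if (buf == NULL) {
		return 1;
	}

	paddr = spdk_vtophys(buf, &size);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		spdk_dma_free(buf);
		return 1;
	}

	spdk_dma_free(buf);
	return 0;
}
```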
00:03:41.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.713 EAL: Restoring previous memory policy: 4 00:03:41.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.713 EAL: request: mp_malloc_sync 00:03:41.713 EAL: No shared files mode enabled, IPC is disabled 00:03:41.713 EAL: Heap on socket 0 was expanded by 66MB 00:03:41.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.713 EAL: request: mp_malloc_sync 00:03:41.713 EAL: No shared files mode enabled, IPC is disabled 00:03:41.713 EAL: Heap on socket 0 was shrunk by 66MB 00:03:41.713 EAL: Trying to obtain current memory policy. 00:03:41.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.975 EAL: Restoring previous memory policy: 4 00:03:41.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.975 EAL: request: mp_malloc_sync 00:03:41.975 EAL: No shared files mode enabled, IPC is disabled 00:03:41.975 EAL: Heap on socket 0 was expanded by 130MB 00:03:41.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.975 EAL: request: mp_malloc_sync 00:03:41.975 EAL: No shared files mode enabled, IPC is disabled 00:03:41.975 EAL: Heap on socket 0 was shrunk by 130MB 00:03:42.236 EAL: Trying to obtain current memory policy. 00:03:42.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.236 EAL: Restoring previous memory policy: 4 00:03:42.236 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.236 EAL: request: mp_malloc_sync 00:03:42.236 EAL: No shared files mode enabled, IPC is disabled 00:03:42.236 EAL: Heap on socket 0 was expanded by 258MB 00:03:42.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.497 EAL: request: mp_malloc_sync 00:03:42.497 EAL: No shared files mode enabled, IPC is disabled 00:03:42.497 EAL: Heap on socket 0 was shrunk by 258MB 00:03:42.756 EAL: Trying to obtain current memory policy. 00:03:42.756 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.756 EAL: Restoring previous memory policy: 4 00:03:42.756 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.756 EAL: request: mp_malloc_sync 00:03:42.756 EAL: No shared files mode enabled, IPC is disabled 00:03:42.756 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.364 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.364 EAL: request: mp_malloc_sync 00:03:43.364 EAL: No shared files mode enabled, IPC is disabled 00:03:43.364 EAL: Heap on socket 0 was shrunk by 514MB 00:03:43.932 EAL: Trying to obtain current memory policy. 
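The repeated "Calling mem event callback 'spdk:(nil)'" lines mark SPDK's memory hotplug hook firing as DPDK grows or releases the heap; each added or removed region is pushed through the registered mem maps, which is also what the earlier env_memory errors about unaligned vaddr/len were validating. A rough sketch of that registration path through the public API (illustrative assumptions only, not the test source):

```c
/* Illustrative sketch; assumes the spdk/env.h mem map API. Registers a
 * notify callback that runs whenever memory is (un)registered, then
 * registers a 2 MB-aligned region, as the env_memory and mem_callbacks
 * tests exercise. */
#include "spdk/stdinc.h"
#include "spdk/env.h"

static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t len)
{
	/* Record or translate the region here; returning 0 accepts it. */
	return 0;
}

static const struct spdk_mem_map_ops ops = {
	.notify_cb = notify_cb,
	.are_contiguous = NULL,
};

void
register_region(void *region)	/* assumed 2 MB-aligned, hugepage-backed */
{
	struct spdk_mem_map *map;

	map = spdk_mem_map_alloc(0 /* default translation */, &ops, NULL);
	if (map == NULL) {
		return;
	}

	/* Both vaddr and len must be multiples of 2 MB; otherwise the call
	 * fails just like the "invalid spdk_mem_register parameters" errors
	 * reported by the env_memory suite. */
	spdk_mem_register(region, 0x200000);
	spdk_mem_unregister(region, 0x200000);

	spdk_mem_map_free(&map);
}
```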
00:03:43.932 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.191 EAL: Restoring previous memory policy: 4 00:03:44.191 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.191 EAL: request: mp_malloc_sync 00:03:44.191 EAL: No shared files mode enabled, IPC is disabled 00:03:44.191 EAL: Heap on socket 0 was expanded by 1026MB 00:03:45.128 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.128 EAL: request: mp_malloc_sync 00:03:45.128 EAL: No shared files mode enabled, IPC is disabled 00:03:45.128 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:46.058 passed 00:03:46.058 00:03:46.058 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.058 suites 1 1 n/a 0 0 00:03:46.058 tests 2 2 2 0 0 00:03:46.058 asserts 5838 5838 5838 0 n/a 00:03:46.058 00:03:46.058 Elapsed time = 4.567 seconds 00:03:46.058 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.058 EAL: request: mp_malloc_sync 00:03:46.058 EAL: No shared files mode enabled, IPC is disabled 00:03:46.058 EAL: Heap on socket 0 was shrunk by 2MB 00:03:46.058 EAL: No shared files mode enabled, IPC is disabled 00:03:46.058 EAL: No shared files mode enabled, IPC is disabled 00:03:46.058 EAL: No shared files mode enabled, IPC is disabled 00:03:46.058 ************************************ 00:03:46.058 END TEST env_vtophys 00:03:46.058 ************************************ 00:03:46.058 00:03:46.058 real 0m4.829s 00:03:46.058 user 0m4.057s 00:03:46.058 sys 0m0.627s 00:03:46.058 17:07:28 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.058 17:07:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:46.058 17:07:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:46.058 17:07:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.058 17:07:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.058 17:07:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.058 ************************************ 00:03:46.058 START TEST env_pci 00:03:46.058 ************************************ 00:03:46.058 17:07:28 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:46.058 00:03:46.058 00:03:46.058 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.058 http://cunit.sourceforge.net/ 00:03:46.058 00:03:46.058 00:03:46.058 Suite: pci 00:03:46.058 Test: pci_hook ...[2024-10-30 17:07:28.787097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56939 has claimed it 00:03:46.058 EAL: Cannot find device (10000:00:01.0) 00:03:46.058 EAL: Failed to attach device on primary process 00:03:46.058 passed 00:03:46.058 00:03:46.058 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.058 suites 1 1 n/a 0 0 00:03:46.058 tests 1 1 1 0 0 00:03:46.058 asserts 25 25 25 0 n/a 00:03:46.058 00:03:46.058 Elapsed time = 0.004 seconds 00:03:46.058 00:03:46.058 real 0m0.060s 00:03:46.058 user 0m0.026s 00:03:46.058 sys 0m0.032s 00:03:46.058 ************************************ 00:03:46.058 END TEST env_pci 00:03:46.058 ************************************ 00:03:46.058 17:07:28 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.058 17:07:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:46.058 17:07:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:46.058 17:07:28 env -- env/env.sh@15 -- # uname 00:03:46.058 17:07:28 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:46.058 17:07:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:46.058 17:07:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:46.058 17:07:28 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:46.059 17:07:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.059 17:07:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.059 ************************************ 00:03:46.059 START TEST env_dpdk_post_init 00:03:46.059 ************************************ 00:03:46.059 17:07:28 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:46.059 EAL: Detected CPU lcores: 10 00:03:46.059 EAL: Detected NUMA nodes: 1 00:03:46.059 EAL: Detected shared linkage of DPDK 00:03:46.059 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.059 EAL: Selected IOVA mode 'PA' 00:03:46.059 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.316 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:46.316 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:46.316 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:03:46.316 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:03:46.316 Starting DPDK initialization... 00:03:46.316 Starting SPDK post initialization... 00:03:46.316 SPDK NVMe probe 00:03:46.316 Attaching to 0000:00:10.0 00:03:46.316 Attaching to 0000:00:11.0 00:03:46.316 Attaching to 0000:00:12.0 00:03:46.316 Attaching to 0000:00:13.0 00:03:46.316 Attached to 0000:00:10.0 00:03:46.316 Attached to 0000:00:11.0 00:03:46.316 Attached to 0000:00:13.0 00:03:46.316 Attached to 0000:00:12.0 00:03:46.316 Cleaning up... 
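The "SPDK NVMe probe ... Attaching to / Attached to" lines above are the discover-and-attach flow over the emulated QEMU NVMe controllers (1b36:0010). A hedged sketch of that flow using the public NVMe driver API (not the test's actual code; the helper name is made up):

```c
/* Illustrative sketch; assumes spdk/nvme.h. Probe all PCIe NVMe controllers
 * and attach to each one, roughly what produces the "Attaching to" /
 * "Attached to" lines in the env_dpdk_post_init output above. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;		/* attach to every controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	/* A real consumer would keep ctrlr and later spdk_nvme_detach() it. */
}

int
probe_all(void)
{
	/* NULL trid means "all local PCIe controllers"; no remove callback. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
```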
00:03:46.316 ************************************ 00:03:46.316 00:03:46.316 real 0m0.244s 00:03:46.316 user 0m0.079s 00:03:46.316 sys 0m0.067s 00:03:46.316 17:07:29 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.316 17:07:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.316 END TEST env_dpdk_post_init 00:03:46.316 ************************************ 00:03:46.316 17:07:29 env -- env/env.sh@26 -- # uname 00:03:46.316 17:07:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:46.316 17:07:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.316 17:07:29 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.316 17:07:29 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.316 17:07:29 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.316 ************************************ 00:03:46.316 START TEST env_mem_callbacks 00:03:46.316 ************************************ 00:03:46.316 17:07:29 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.316 EAL: Detected CPU lcores: 10 00:03:46.316 EAL: Detected NUMA nodes: 1 00:03:46.316 EAL: Detected shared linkage of DPDK 00:03:46.316 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.316 EAL: Selected IOVA mode 'PA' 00:03:46.574 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.574 00:03:46.574 00:03:46.574 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.574 http://cunit.sourceforge.net/ 00:03:46.574 00:03:46.574 00:03:46.574 Suite: memory 00:03:46.574 Test: test ... 00:03:46.574 register 0x200000200000 2097152 00:03:46.574 malloc 3145728 00:03:46.574 register 0x200000400000 4194304 00:03:46.574 buf 0x2000004fffc0 len 3145728 PASSED 00:03:46.574 malloc 64 00:03:46.574 buf 0x2000004ffec0 len 64 PASSED 00:03:46.574 malloc 4194304 00:03:46.574 register 0x200000800000 6291456 00:03:46.574 buf 0x2000009fffc0 len 4194304 PASSED 00:03:46.574 free 0x2000004fffc0 3145728 00:03:46.574 free 0x2000004ffec0 64 00:03:46.574 unregister 0x200000400000 4194304 PASSED 00:03:46.574 free 0x2000009fffc0 4194304 00:03:46.574 unregister 0x200000800000 6291456 PASSED 00:03:46.574 malloc 8388608 00:03:46.574 register 0x200000400000 10485760 00:03:46.574 buf 0x2000005fffc0 len 8388608 PASSED 00:03:46.574 free 0x2000005fffc0 8388608 00:03:46.574 unregister 0x200000400000 10485760 PASSED 00:03:46.574 passed 00:03:46.574 00:03:46.574 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.574 suites 1 1 n/a 0 0 00:03:46.574 tests 1 1 1 0 0 00:03:46.574 asserts 15 15 15 0 n/a 00:03:46.574 00:03:46.574 Elapsed time = 0.047 seconds 00:03:46.574 00:03:46.574 real 0m0.217s 00:03:46.574 user 0m0.056s 00:03:46.574 sys 0m0.058s 00:03:46.574 17:07:29 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.574 17:07:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:46.574 ************************************ 00:03:46.574 END TEST env_mem_callbacks 00:03:46.574 ************************************ 00:03:46.574 00:03:46.574 real 0m6.028s 00:03:46.574 user 0m4.676s 00:03:46.574 sys 0m1.001s 00:03:46.574 17:07:29 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.574 ************************************ 00:03:46.574 END TEST env 00:03:46.574 ************************************ 00:03:46.574 17:07:29 env -- 
common/autotest_common.sh@10 -- # set +x 00:03:46.574 17:07:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:46.574 17:07:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.574 17:07:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.574 17:07:29 -- common/autotest_common.sh@10 -- # set +x 00:03:46.574 ************************************ 00:03:46.574 START TEST rpc 00:03:46.574 ************************************ 00:03:46.574 17:07:29 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:46.574 * Looking for test storage... 00:03:46.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.574 17:07:29 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:46.574 17:07:29 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:46.574 17:07:29 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:46.832 17:07:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.832 17:07:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.832 17:07:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.832 17:07:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.832 17:07:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.832 17:07:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.832 17:07:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.832 17:07:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.832 17:07:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.832 17:07:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.832 17:07:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.832 17:07:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:46.832 17:07:29 rpc -- scripts/common.sh@345 -- # : 1 00:03:46.832 17:07:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.832 17:07:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:46.832 17:07:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:46.832 17:07:29 rpc -- scripts/common.sh@353 -- # local d=1 00:03:46.832 17:07:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.832 17:07:29 rpc -- scripts/common.sh@355 -- # echo 1 00:03:46.832 17:07:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.832 17:07:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:46.832 17:07:29 rpc -- scripts/common.sh@353 -- # local d=2 00:03:46.832 17:07:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.832 17:07:29 rpc -- scripts/common.sh@355 -- # echo 2 00:03:46.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:46.832 17:07:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.832 17:07:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.832 17:07:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.832 17:07:29 rpc -- scripts/common.sh@368 -- # return 0 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:46.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.832 --rc genhtml_branch_coverage=1 00:03:46.832 --rc genhtml_function_coverage=1 00:03:46.832 --rc genhtml_legend=1 00:03:46.832 --rc geninfo_all_blocks=1 00:03:46.832 --rc geninfo_unexecuted_blocks=1 00:03:46.832 00:03:46.832 ' 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:46.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.832 --rc genhtml_branch_coverage=1 00:03:46.832 --rc genhtml_function_coverage=1 00:03:46.832 --rc genhtml_legend=1 00:03:46.832 --rc geninfo_all_blocks=1 00:03:46.832 --rc geninfo_unexecuted_blocks=1 00:03:46.832 00:03:46.832 ' 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:46.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.832 --rc genhtml_branch_coverage=1 00:03:46.832 --rc genhtml_function_coverage=1 00:03:46.832 --rc genhtml_legend=1 00:03:46.832 --rc geninfo_all_blocks=1 00:03:46.832 --rc geninfo_unexecuted_blocks=1 00:03:46.832 00:03:46.832 ' 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:46.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.832 --rc genhtml_branch_coverage=1 00:03:46.832 --rc genhtml_function_coverage=1 00:03:46.832 --rc genhtml_legend=1 00:03:46.832 --rc geninfo_all_blocks=1 00:03:46.832 --rc geninfo_unexecuted_blocks=1 00:03:46.832 00:03:46.832 ' 00:03:46.832 17:07:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57066 00:03:46.832 17:07:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.832 17:07:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57066 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@833 -- # '[' -z 57066 ']' 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.832 17:07:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:46.832 17:07:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.832 [2024-10-30 17:07:29.639715] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:03:46.832 [2024-10-30 17:07:29.639976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57066 ] 00:03:46.832 [2024-10-30 17:07:29.795303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.089 [2024-10-30 17:07:29.891427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:03:47.089 [2024-10-30 17:07:29.891486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57066' to capture a snapshot of events at runtime. 00:03:47.089 [2024-10-30 17:07:29.891496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:47.089 [2024-10-30 17:07:29.891505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:47.089 [2024-10-30 17:07:29.891513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57066 for offline analysis/debug. 00:03:47.089 [2024-10-30 17:07:29.892359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.654 17:07:30 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:47.654 17:07:30 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:47.654 17:07:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.654 17:07:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.654 17:07:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:47.654 17:07:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:47.654 17:07:30 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.654 17:07:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.654 17:07:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.654 ************************************ 00:03:47.654 START TEST rpc_integrity 00:03:47.654 ************************************ 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:47.654 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.654 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.654 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:47.654 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.654 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.654 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:47.654 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.654 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.654 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.654 { 00:03:47.654 "name": "Malloc0", 00:03:47.654 "aliases": [ 00:03:47.654 "de4fc1e6-1048-4f3e-8dd6-c9dad0c75eb7" 00:03:47.654 ], 
00:03:47.654 "product_name": "Malloc disk", 00:03:47.654 "block_size": 512, 00:03:47.654 "num_blocks": 16384, 00:03:47.654 "uuid": "de4fc1e6-1048-4f3e-8dd6-c9dad0c75eb7", 00:03:47.654 "assigned_rate_limits": { 00:03:47.654 "rw_ios_per_sec": 0, 00:03:47.654 "rw_mbytes_per_sec": 0, 00:03:47.655 "r_mbytes_per_sec": 0, 00:03:47.655 "w_mbytes_per_sec": 0 00:03:47.655 }, 00:03:47.655 "claimed": false, 00:03:47.655 "zoned": false, 00:03:47.655 "supported_io_types": { 00:03:47.655 "read": true, 00:03:47.655 "write": true, 00:03:47.655 "unmap": true, 00:03:47.655 "flush": true, 00:03:47.655 "reset": true, 00:03:47.655 "nvme_admin": false, 00:03:47.655 "nvme_io": false, 00:03:47.655 "nvme_io_md": false, 00:03:47.655 "write_zeroes": true, 00:03:47.655 "zcopy": true, 00:03:47.655 "get_zone_info": false, 00:03:47.655 "zone_management": false, 00:03:47.655 "zone_append": false, 00:03:47.655 "compare": false, 00:03:47.655 "compare_and_write": false, 00:03:47.655 "abort": true, 00:03:47.655 "seek_hole": false, 00:03:47.655 "seek_data": false, 00:03:47.655 "copy": true, 00:03:47.655 "nvme_iov_md": false 00:03:47.655 }, 00:03:47.655 "memory_domains": [ 00:03:47.655 { 00:03:47.655 "dma_device_id": "system", 00:03:47.655 "dma_device_type": 1 00:03:47.655 }, 00:03:47.655 { 00:03:47.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.655 "dma_device_type": 2 00:03:47.655 } 00:03:47.655 ], 00:03:47.655 "driver_specific": {} 00:03:47.655 } 00:03:47.655 ]' 00:03:47.655 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:47.655 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.655 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:47.655 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.655 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.655 [2024-10-30 17:07:30.593101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:47.655 [2024-10-30 17:07:30.593162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.655 [2024-10-30 17:07:30.593190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:03:47.655 [2024-10-30 17:07:30.593215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.655 [2024-10-30 17:07:30.595437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.655 [2024-10-30 17:07:30.595478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.655 Passthru0 00:03:47.655 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.655 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.655 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.655 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.655 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.655 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.655 { 00:03:47.655 "name": "Malloc0", 00:03:47.655 "aliases": [ 00:03:47.655 "de4fc1e6-1048-4f3e-8dd6-c9dad0c75eb7" 00:03:47.655 ], 00:03:47.655 "product_name": "Malloc disk", 00:03:47.655 "block_size": 512, 00:03:47.655 "num_blocks": 16384, 00:03:47.655 "uuid": "de4fc1e6-1048-4f3e-8dd6-c9dad0c75eb7", 00:03:47.655 "assigned_rate_limits": { 00:03:47.655 "rw_ios_per_sec": 0, 
00:03:47.655 "rw_mbytes_per_sec": 0, 00:03:47.655 "r_mbytes_per_sec": 0, 00:03:47.655 "w_mbytes_per_sec": 0 00:03:47.655 }, 00:03:47.655 "claimed": true, 00:03:47.655 "claim_type": "exclusive_write", 00:03:47.655 "zoned": false, 00:03:47.655 "supported_io_types": { 00:03:47.655 "read": true, 00:03:47.655 "write": true, 00:03:47.655 "unmap": true, 00:03:47.655 "flush": true, 00:03:47.655 "reset": true, 00:03:47.655 "nvme_admin": false, 00:03:47.655 "nvme_io": false, 00:03:47.655 "nvme_io_md": false, 00:03:47.655 "write_zeroes": true, 00:03:47.655 "zcopy": true, 00:03:47.655 "get_zone_info": false, 00:03:47.655 "zone_management": false, 00:03:47.655 "zone_append": false, 00:03:47.655 "compare": false, 00:03:47.655 "compare_and_write": false, 00:03:47.655 "abort": true, 00:03:47.655 "seek_hole": false, 00:03:47.655 "seek_data": false, 00:03:47.655 "copy": true, 00:03:47.655 "nvme_iov_md": false 00:03:47.655 }, 00:03:47.655 "memory_domains": [ 00:03:47.655 { 00:03:47.655 "dma_device_id": "system", 00:03:47.655 "dma_device_type": 1 00:03:47.655 }, 00:03:47.655 { 00:03:47.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.655 "dma_device_type": 2 00:03:47.655 } 00:03:47.655 ], 00:03:47.655 "driver_specific": {} 00:03:47.655 }, 00:03:47.655 { 00:03:47.655 "name": "Passthru0", 00:03:47.655 "aliases": [ 00:03:47.655 "70a9e209-5a2e-55bd-b365-a353ce312908" 00:03:47.655 ], 00:03:47.655 "product_name": "passthru", 00:03:47.655 "block_size": 512, 00:03:47.655 "num_blocks": 16384, 00:03:47.655 "uuid": "70a9e209-5a2e-55bd-b365-a353ce312908", 00:03:47.655 "assigned_rate_limits": { 00:03:47.655 "rw_ios_per_sec": 0, 00:03:47.655 "rw_mbytes_per_sec": 0, 00:03:47.655 "r_mbytes_per_sec": 0, 00:03:47.655 "w_mbytes_per_sec": 0 00:03:47.655 }, 00:03:47.655 "claimed": false, 00:03:47.655 "zoned": false, 00:03:47.655 "supported_io_types": { 00:03:47.655 "read": true, 00:03:47.655 "write": true, 00:03:47.655 "unmap": true, 00:03:47.655 "flush": true, 00:03:47.655 "reset": true, 00:03:47.655 "nvme_admin": false, 00:03:47.655 "nvme_io": false, 00:03:47.655 "nvme_io_md": false, 00:03:47.655 "write_zeroes": true, 00:03:47.655 "zcopy": true, 00:03:47.655 "get_zone_info": false, 00:03:47.655 "zone_management": false, 00:03:47.655 "zone_append": false, 00:03:47.655 "compare": false, 00:03:47.655 "compare_and_write": false, 00:03:47.655 "abort": true, 00:03:47.655 "seek_hole": false, 00:03:47.655 "seek_data": false, 00:03:47.655 "copy": true, 00:03:47.655 "nvme_iov_md": false 00:03:47.655 }, 00:03:47.655 "memory_domains": [ 00:03:47.655 { 00:03:47.655 "dma_device_id": "system", 00:03:47.655 "dma_device_type": 1 00:03:47.655 }, 00:03:47.655 { 00:03:47.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.655 "dma_device_type": 2 00:03:47.655 } 00:03:47.655 ], 00:03:47.655 "driver_specific": { 00:03:47.655 "passthru": { 00:03:47.655 "name": "Passthru0", 00:03:47.655 "base_bdev_name": "Malloc0" 00:03:47.655 } 00:03:47.655 } 00:03:47.655 } 00:03:47.655 ]' 00:03:47.655 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.913 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.913 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.913 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.913 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.913 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.913 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.913 ************************************ 00:03:47.913 END TEST rpc_integrity 00:03:47.913 ************************************ 00:03:47.913 17:07:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.913 00:03:47.913 real 0m0.238s 00:03:47.913 user 0m0.121s 00:03:47.913 sys 0m0.033s 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:47.913 17:07:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 17:07:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.913 17:07:30 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.913 17:07:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.913 17:07:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 ************************************ 00:03:47.913 START TEST rpc_plugins 00:03:47.913 ************************************ 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.913 { 00:03:47.913 "name": "Malloc1", 00:03:47.913 "aliases": [ 00:03:47.913 "8797ca5b-519a-43ef-90e8-1106c8575245" 00:03:47.913 ], 00:03:47.913 "product_name": "Malloc disk", 00:03:47.913 "block_size": 4096, 00:03:47.913 "num_blocks": 256, 00:03:47.913 "uuid": "8797ca5b-519a-43ef-90e8-1106c8575245", 00:03:47.913 "assigned_rate_limits": { 00:03:47.913 "rw_ios_per_sec": 0, 00:03:47.913 "rw_mbytes_per_sec": 0, 00:03:47.913 "r_mbytes_per_sec": 0, 00:03:47.913 "w_mbytes_per_sec": 0 00:03:47.913 }, 00:03:47.913 "claimed": false, 00:03:47.913 "zoned": false, 00:03:47.913 "supported_io_types": { 00:03:47.913 "read": true, 00:03:47.913 "write": true, 00:03:47.913 "unmap": true, 00:03:47.913 "flush": true, 00:03:47.913 "reset": true, 00:03:47.913 "nvme_admin": false, 00:03:47.913 "nvme_io": false, 00:03:47.913 "nvme_io_md": false, 00:03:47.913 "write_zeroes": true, 
00:03:47.913 "zcopy": true, 00:03:47.913 "get_zone_info": false, 00:03:47.913 "zone_management": false, 00:03:47.913 "zone_append": false, 00:03:47.913 "compare": false, 00:03:47.913 "compare_and_write": false, 00:03:47.913 "abort": true, 00:03:47.913 "seek_hole": false, 00:03:47.913 "seek_data": false, 00:03:47.913 "copy": true, 00:03:47.913 "nvme_iov_md": false 00:03:47.913 }, 00:03:47.913 "memory_domains": [ 00:03:47.913 { 00:03:47.913 "dma_device_id": "system", 00:03:47.913 "dma_device_type": 1 00:03:47.913 }, 00:03:47.913 { 00:03:47.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.913 "dma_device_type": 2 00:03:47.913 } 00:03:47.913 ], 00:03:47.913 "driver_specific": {} 00:03:47.913 } 00:03:47.913 ]' 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.913 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:47.913 ************************************ 00:03:47.913 END TEST rpc_plugins 00:03:47.913 ************************************ 00:03:47.913 17:07:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.913 00:03:47.913 real 0m0.106s 00:03:47.913 user 0m0.066s 00:03:47.913 sys 0m0.011s 00:03:47.914 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:47.914 17:07:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.171 17:07:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:48.171 17:07:30 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.171 17:07:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.171 17:07:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.171 ************************************ 00:03:48.171 START TEST rpc_trace_cmd_test 00:03:48.171 ************************************ 00:03:48.171 17:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:48.171 17:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:48.171 17:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:48.171 17:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.171 17:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.171 17:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.171 17:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:48.171 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57066", 00:03:48.171 "tpoint_group_mask": "0x8", 00:03:48.171 "iscsi_conn": { 00:03:48.171 "mask": "0x2", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.171 }, 00:03:48.171 "scsi": { 00:03:48.171 
"mask": "0x4", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.171 }, 00:03:48.171 "bdev": { 00:03:48.171 "mask": "0x8", 00:03:48.171 "tpoint_mask": "0xffffffffffffffff" 00:03:48.171 }, 00:03:48.171 "nvmf_rdma": { 00:03:48.171 "mask": "0x10", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.171 }, 00:03:48.171 "nvmf_tcp": { 00:03:48.171 "mask": "0x20", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.171 }, 00:03:48.171 "ftl": { 00:03:48.171 "mask": "0x40", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.171 }, 00:03:48.171 "blobfs": { 00:03:48.171 "mask": "0x80", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.171 }, 00:03:48.171 "dsa": { 00:03:48.171 "mask": "0x200", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.171 }, 00:03:48.171 "thread": { 00:03:48.171 "mask": "0x400", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.171 }, 00:03:48.171 "nvme_pcie": { 00:03:48.171 "mask": "0x800", 00:03:48.171 "tpoint_mask": "0x0" 00:03:48.172 }, 00:03:48.172 "iaa": { 00:03:48.172 "mask": "0x1000", 00:03:48.172 "tpoint_mask": "0x0" 00:03:48.172 }, 00:03:48.172 "nvme_tcp": { 00:03:48.172 "mask": "0x2000", 00:03:48.172 "tpoint_mask": "0x0" 00:03:48.172 }, 00:03:48.172 "bdev_nvme": { 00:03:48.172 "mask": "0x4000", 00:03:48.172 "tpoint_mask": "0x0" 00:03:48.172 }, 00:03:48.172 "sock": { 00:03:48.172 "mask": "0x8000", 00:03:48.172 "tpoint_mask": "0x0" 00:03:48.172 }, 00:03:48.172 "blob": { 00:03:48.172 "mask": "0x10000", 00:03:48.172 "tpoint_mask": "0x0" 00:03:48.172 }, 00:03:48.172 "bdev_raid": { 00:03:48.172 "mask": "0x20000", 00:03:48.172 "tpoint_mask": "0x0" 00:03:48.172 }, 00:03:48.172 "scheduler": { 00:03:48.172 "mask": "0x40000", 00:03:48.172 "tpoint_mask": "0x0" 00:03:48.172 } 00:03:48.172 }' 00:03:48.172 17:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:48.172 17:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:48.172 17:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:48.172 17:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:48.172 17:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:48.172 17:07:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:48.172 17:07:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:48.172 17:07:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:48.172 17:07:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:48.172 ************************************ 00:03:48.172 END TEST rpc_trace_cmd_test 00:03:48.172 ************************************ 00:03:48.172 17:07:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:48.172 00:03:48.172 real 0m0.173s 00:03:48.172 user 0m0.139s 00:03:48.172 sys 0m0.026s 00:03:48.172 17:07:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.172 17:07:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.172 17:07:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:48.172 17:07:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:48.172 17:07:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:48.172 17:07:31 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.172 17:07:31 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.172 17:07:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.172 ************************************ 00:03:48.172 START TEST rpc_daemon_integrity 00:03:48.172 
************************************ 00:03:48.172 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:48.172 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.172 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.172 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.172 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.172 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.172 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.430 { 00:03:48.430 "name": "Malloc2", 00:03:48.430 "aliases": [ 00:03:48.430 "d13b8ec0-918e-40b6-b818-6cdde241af12" 00:03:48.430 ], 00:03:48.430 "product_name": "Malloc disk", 00:03:48.430 "block_size": 512, 00:03:48.430 "num_blocks": 16384, 00:03:48.430 "uuid": "d13b8ec0-918e-40b6-b818-6cdde241af12", 00:03:48.430 "assigned_rate_limits": { 00:03:48.430 "rw_ios_per_sec": 0, 00:03:48.430 "rw_mbytes_per_sec": 0, 00:03:48.430 "r_mbytes_per_sec": 0, 00:03:48.430 "w_mbytes_per_sec": 0 00:03:48.430 }, 00:03:48.430 "claimed": false, 00:03:48.430 "zoned": false, 00:03:48.430 "supported_io_types": { 00:03:48.430 "read": true, 00:03:48.430 "write": true, 00:03:48.430 "unmap": true, 00:03:48.430 "flush": true, 00:03:48.430 "reset": true, 00:03:48.430 "nvme_admin": false, 00:03:48.430 "nvme_io": false, 00:03:48.430 "nvme_io_md": false, 00:03:48.430 "write_zeroes": true, 00:03:48.430 "zcopy": true, 00:03:48.430 "get_zone_info": false, 00:03:48.430 "zone_management": false, 00:03:48.430 "zone_append": false, 00:03:48.430 "compare": false, 00:03:48.430 "compare_and_write": false, 00:03:48.430 "abort": true, 00:03:48.430 "seek_hole": false, 00:03:48.430 "seek_data": false, 00:03:48.430 "copy": true, 00:03:48.430 "nvme_iov_md": false 00:03:48.430 }, 00:03:48.430 "memory_domains": [ 00:03:48.430 { 00:03:48.430 "dma_device_id": "system", 00:03:48.430 "dma_device_type": 1 00:03:48.430 }, 00:03:48.430 { 00:03:48.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.430 "dma_device_type": 2 00:03:48.430 } 00:03:48.430 ], 00:03:48.430 "driver_specific": {} 00:03:48.430 } 00:03:48.430 ]' 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.430 [2024-10-30 17:07:31.220187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:48.430 [2024-10-30 17:07:31.220260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.430 [2024-10-30 17:07:31.220280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:03:48.430 [2024-10-30 17:07:31.220292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.430 [2024-10-30 17:07:31.222429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.430 [2024-10-30 17:07:31.222561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.430 Passthru0 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.430 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.430 { 00:03:48.430 "name": "Malloc2", 00:03:48.430 "aliases": [ 00:03:48.430 "d13b8ec0-918e-40b6-b818-6cdde241af12" 00:03:48.430 ], 00:03:48.430 "product_name": "Malloc disk", 00:03:48.430 "block_size": 512, 00:03:48.430 "num_blocks": 16384, 00:03:48.430 "uuid": "d13b8ec0-918e-40b6-b818-6cdde241af12", 00:03:48.430 "assigned_rate_limits": { 00:03:48.430 "rw_ios_per_sec": 0, 00:03:48.430 "rw_mbytes_per_sec": 0, 00:03:48.430 "r_mbytes_per_sec": 0, 00:03:48.430 "w_mbytes_per_sec": 0 00:03:48.430 }, 00:03:48.430 "claimed": true, 00:03:48.430 "claim_type": "exclusive_write", 00:03:48.430 "zoned": false, 00:03:48.430 "supported_io_types": { 00:03:48.430 "read": true, 00:03:48.430 "write": true, 00:03:48.430 "unmap": true, 00:03:48.431 "flush": true, 00:03:48.431 "reset": true, 00:03:48.431 "nvme_admin": false, 00:03:48.431 "nvme_io": false, 00:03:48.431 "nvme_io_md": false, 00:03:48.431 "write_zeroes": true, 00:03:48.431 "zcopy": true, 00:03:48.431 "get_zone_info": false, 00:03:48.431 "zone_management": false, 00:03:48.431 "zone_append": false, 00:03:48.431 "compare": false, 00:03:48.431 "compare_and_write": false, 00:03:48.431 "abort": true, 00:03:48.431 "seek_hole": false, 00:03:48.431 "seek_data": false, 00:03:48.431 "copy": true, 00:03:48.431 "nvme_iov_md": false 00:03:48.431 }, 00:03:48.431 "memory_domains": [ 00:03:48.431 { 00:03:48.431 "dma_device_id": "system", 00:03:48.431 "dma_device_type": 1 00:03:48.431 }, 00:03:48.431 { 00:03:48.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.431 "dma_device_type": 2 00:03:48.431 } 00:03:48.431 ], 00:03:48.431 "driver_specific": {} 00:03:48.431 }, 00:03:48.431 { 00:03:48.431 "name": "Passthru0", 00:03:48.431 "aliases": [ 00:03:48.431 "785101ed-1ebb-541e-8580-3c656e1fe42f" 00:03:48.431 ], 00:03:48.431 "product_name": "passthru", 00:03:48.431 "block_size": 512, 00:03:48.431 "num_blocks": 16384, 00:03:48.431 "uuid": "785101ed-1ebb-541e-8580-3c656e1fe42f", 00:03:48.431 "assigned_rate_limits": { 00:03:48.431 
"rw_ios_per_sec": 0, 00:03:48.431 "rw_mbytes_per_sec": 0, 00:03:48.431 "r_mbytes_per_sec": 0, 00:03:48.431 "w_mbytes_per_sec": 0 00:03:48.431 }, 00:03:48.431 "claimed": false, 00:03:48.431 "zoned": false, 00:03:48.431 "supported_io_types": { 00:03:48.431 "read": true, 00:03:48.431 "write": true, 00:03:48.431 "unmap": true, 00:03:48.431 "flush": true, 00:03:48.431 "reset": true, 00:03:48.431 "nvme_admin": false, 00:03:48.431 "nvme_io": false, 00:03:48.431 "nvme_io_md": false, 00:03:48.431 "write_zeroes": true, 00:03:48.431 "zcopy": true, 00:03:48.431 "get_zone_info": false, 00:03:48.431 "zone_management": false, 00:03:48.431 "zone_append": false, 00:03:48.431 "compare": false, 00:03:48.431 "compare_and_write": false, 00:03:48.431 "abort": true, 00:03:48.431 "seek_hole": false, 00:03:48.431 "seek_data": false, 00:03:48.431 "copy": true, 00:03:48.431 "nvme_iov_md": false 00:03:48.431 }, 00:03:48.431 "memory_domains": [ 00:03:48.431 { 00:03:48.431 "dma_device_id": "system", 00:03:48.431 "dma_device_type": 1 00:03:48.431 }, 00:03:48.431 { 00:03:48.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.431 "dma_device_type": 2 00:03:48.431 } 00:03:48.431 ], 00:03:48.431 "driver_specific": { 00:03:48.431 "passthru": { 00:03:48.431 "name": "Passthru0", 00:03:48.431 "base_bdev_name": "Malloc2" 00:03:48.431 } 00:03:48.431 } 00:03:48.431 } 00:03:48.431 ]' 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.431 ************************************ 00:03:48.431 END TEST rpc_daemon_integrity 00:03:48.431 ************************************ 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.431 00:03:48.431 real 0m0.237s 00:03:48.431 user 0m0.131s 00:03:48.431 sys 0m0.030s 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.431 17:07:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.431 17:07:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:48.431 17:07:31 rpc -- rpc/rpc.sh@84 -- # killprocess 57066 00:03:48.431 17:07:31 rpc -- 
common/autotest_common.sh@952 -- # '[' -z 57066 ']' 00:03:48.431 17:07:31 rpc -- common/autotest_common.sh@956 -- # kill -0 57066 00:03:48.431 17:07:31 rpc -- common/autotest_common.sh@957 -- # uname 00:03:48.431 17:07:31 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:48.431 17:07:31 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57066 00:03:48.689 killing process with pid 57066 00:03:48.689 17:07:31 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:48.689 17:07:31 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:48.689 17:07:31 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57066' 00:03:48.689 17:07:31 rpc -- common/autotest_common.sh@971 -- # kill 57066 00:03:48.689 17:07:31 rpc -- common/autotest_common.sh@976 -- # wait 57066 00:03:50.132 ************************************ 00:03:50.132 END TEST rpc 00:03:50.132 ************************************ 00:03:50.132 00:03:50.132 real 0m3.412s 00:03:50.132 user 0m3.855s 00:03:50.132 sys 0m0.551s 00:03:50.132 17:07:32 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.132 17:07:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.132 17:07:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:50.132 17:07:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.132 17:07:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.132 17:07:32 -- common/autotest_common.sh@10 -- # set +x 00:03:50.132 ************************************ 00:03:50.132 START TEST skip_rpc 00:03:50.132 ************************************ 00:03:50.132 17:07:32 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:50.132 * Looking for test storage... 00:03:50.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:50.132 17:07:32 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:50.132 17:07:32 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:50.132 17:07:32 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.132 17:07:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:50.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.132 --rc genhtml_branch_coverage=1 00:03:50.132 --rc genhtml_function_coverage=1 00:03:50.132 --rc genhtml_legend=1 00:03:50.132 --rc geninfo_all_blocks=1 00:03:50.132 --rc geninfo_unexecuted_blocks=1 00:03:50.132 00:03:50.132 ' 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:50.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.132 --rc genhtml_branch_coverage=1 00:03:50.132 --rc genhtml_function_coverage=1 00:03:50.132 --rc genhtml_legend=1 00:03:50.132 --rc geninfo_all_blocks=1 00:03:50.132 --rc geninfo_unexecuted_blocks=1 00:03:50.132 00:03:50.132 ' 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:50.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.132 --rc genhtml_branch_coverage=1 00:03:50.132 --rc genhtml_function_coverage=1 00:03:50.132 --rc genhtml_legend=1 00:03:50.132 --rc geninfo_all_blocks=1 00:03:50.132 --rc geninfo_unexecuted_blocks=1 00:03:50.132 00:03:50.132 ' 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:50.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.132 --rc genhtml_branch_coverage=1 00:03:50.132 --rc genhtml_function_coverage=1 00:03:50.132 --rc genhtml_legend=1 00:03:50.132 --rc geninfo_all_blocks=1 00:03:50.132 --rc geninfo_unexecuted_blocks=1 00:03:50.132 00:03:50.132 ' 00:03:50.132 17:07:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:50.132 17:07:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:50.132 17:07:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.132 17:07:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.132 ************************************ 00:03:50.132 START TEST skip_rpc 00:03:50.132 ************************************ 00:03:50.132 17:07:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:50.132 17:07:33 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57273 00:03:50.132 17:07:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.132 17:07:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:50.132 17:07:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:50.132 [2024-10-30 17:07:33.108843] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:03:50.132 [2024-10-30 17:07:33.109101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57273 ] 00:03:50.390 [2024-10-30 17:07:33.264046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.390 [2024-10-30 17:07:33.344244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57273 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57273 ']' 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57273 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57273 00:03:55.652 killing process with pid 57273 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57273' 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@971 
-- # kill 57273 00:03:55.652 17:07:38 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57273 00:03:56.585 ************************************ 00:03:56.585 END TEST skip_rpc 00:03:56.585 ************************************ 00:03:56.585 00:03:56.585 real 0m6.202s 00:03:56.585 user 0m5.850s 00:03:56.585 sys 0m0.256s 00:03:56.585 17:07:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.586 17:07:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.586 17:07:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:56.586 17:07:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.586 17:07:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.586 17:07:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.586 ************************************ 00:03:56.586 START TEST skip_rpc_with_json 00:03:56.586 ************************************ 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57366 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57366 00:03:56.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57366 ']' 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:56.586 17:07:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.586 [2024-10-30 17:07:39.339565] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:03:56.586 [2024-10-30 17:07:39.339818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57366 ] 00:03:56.586 [2024-10-30 17:07:39.489259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.846 [2024-10-30 17:07:39.570341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.414 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:57.414 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:57.414 17:07:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:57.414 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.414 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.414 [2024-10-30 17:07:40.180781] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:57.414 request: 00:03:57.414 { 00:03:57.414 "trtype": "tcp", 00:03:57.414 "method": "nvmf_get_transports", 00:03:57.414 "req_id": 1 00:03:57.414 } 00:03:57.414 Got JSON-RPC error response 00:03:57.414 response: 00:03:57.414 { 00:03:57.414 "code": -19, 00:03:57.414 "message": "No such device" 00:03:57.414 } 00:03:57.414 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:57.414 17:07:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:57.415 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.415 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.415 [2024-10-30 17:07:40.188892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.415 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.415 17:07:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:57.415 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.415 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.415 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.415 17:07:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:57.415 { 00:03:57.415 "subsystems": [ 00:03:57.415 { 00:03:57.415 "subsystem": "fsdev", 00:03:57.415 "config": [ 00:03:57.415 { 00:03:57.415 "method": "fsdev_set_opts", 00:03:57.415 "params": { 00:03:57.415 "fsdev_io_pool_size": 65535, 00:03:57.415 "fsdev_io_cache_size": 256 00:03:57.415 } 00:03:57.415 } 00:03:57.415 ] 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "subsystem": "keyring", 00:03:57.415 "config": [] 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "subsystem": "iobuf", 00:03:57.415 "config": [ 00:03:57.415 { 00:03:57.415 "method": "iobuf_set_options", 00:03:57.415 "params": { 00:03:57.415 "small_pool_count": 8192, 00:03:57.415 "large_pool_count": 1024, 00:03:57.415 "small_bufsize": 8192, 00:03:57.415 "large_bufsize": 135168, 00:03:57.415 "enable_numa": false 00:03:57.415 } 00:03:57.415 } 00:03:57.415 ] 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "subsystem": "sock", 00:03:57.415 "config": [ 00:03:57.415 { 
00:03:57.415 "method": "sock_set_default_impl", 00:03:57.415 "params": { 00:03:57.415 "impl_name": "posix" 00:03:57.415 } 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "method": "sock_impl_set_options", 00:03:57.415 "params": { 00:03:57.415 "impl_name": "ssl", 00:03:57.415 "recv_buf_size": 4096, 00:03:57.415 "send_buf_size": 4096, 00:03:57.415 "enable_recv_pipe": true, 00:03:57.415 "enable_quickack": false, 00:03:57.415 "enable_placement_id": 0, 00:03:57.415 "enable_zerocopy_send_server": true, 00:03:57.415 "enable_zerocopy_send_client": false, 00:03:57.415 "zerocopy_threshold": 0, 00:03:57.415 "tls_version": 0, 00:03:57.415 "enable_ktls": false 00:03:57.415 } 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "method": "sock_impl_set_options", 00:03:57.415 "params": { 00:03:57.415 "impl_name": "posix", 00:03:57.415 "recv_buf_size": 2097152, 00:03:57.415 "send_buf_size": 2097152, 00:03:57.415 "enable_recv_pipe": true, 00:03:57.415 "enable_quickack": false, 00:03:57.415 "enable_placement_id": 0, 00:03:57.415 "enable_zerocopy_send_server": true, 00:03:57.415 "enable_zerocopy_send_client": false, 00:03:57.415 "zerocopy_threshold": 0, 00:03:57.415 "tls_version": 0, 00:03:57.415 "enable_ktls": false 00:03:57.415 } 00:03:57.415 } 00:03:57.415 ] 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "subsystem": "vmd", 00:03:57.415 "config": [] 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "subsystem": "accel", 00:03:57.415 "config": [ 00:03:57.415 { 00:03:57.415 "method": "accel_set_options", 00:03:57.415 "params": { 00:03:57.415 "small_cache_size": 128, 00:03:57.415 "large_cache_size": 16, 00:03:57.415 "task_count": 2048, 00:03:57.415 "sequence_count": 2048, 00:03:57.415 "buf_count": 2048 00:03:57.415 } 00:03:57.415 } 00:03:57.415 ] 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "subsystem": "bdev", 00:03:57.415 "config": [ 00:03:57.415 { 00:03:57.415 "method": "bdev_set_options", 00:03:57.415 "params": { 00:03:57.415 "bdev_io_pool_size": 65535, 00:03:57.415 "bdev_io_cache_size": 256, 00:03:57.415 "bdev_auto_examine": true, 00:03:57.415 "iobuf_small_cache_size": 128, 00:03:57.415 "iobuf_large_cache_size": 16 00:03:57.415 } 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "method": "bdev_raid_set_options", 00:03:57.415 "params": { 00:03:57.415 "process_window_size_kb": 1024, 00:03:57.415 "process_max_bandwidth_mb_sec": 0 00:03:57.415 } 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "method": "bdev_iscsi_set_options", 00:03:57.415 "params": { 00:03:57.415 "timeout_sec": 30 00:03:57.415 } 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "method": "bdev_nvme_set_options", 00:03:57.415 "params": { 00:03:57.415 "action_on_timeout": "none", 00:03:57.415 "timeout_us": 0, 00:03:57.415 "timeout_admin_us": 0, 00:03:57.415 "keep_alive_timeout_ms": 10000, 00:03:57.415 "arbitration_burst": 0, 00:03:57.415 "low_priority_weight": 0, 00:03:57.415 "medium_priority_weight": 0, 00:03:57.415 "high_priority_weight": 0, 00:03:57.415 "nvme_adminq_poll_period_us": 10000, 00:03:57.415 "nvme_ioq_poll_period_us": 0, 00:03:57.415 "io_queue_requests": 0, 00:03:57.415 "delay_cmd_submit": true, 00:03:57.415 "transport_retry_count": 4, 00:03:57.415 "bdev_retry_count": 3, 00:03:57.415 "transport_ack_timeout": 0, 00:03:57.415 "ctrlr_loss_timeout_sec": 0, 00:03:57.415 "reconnect_delay_sec": 0, 00:03:57.415 "fast_io_fail_timeout_sec": 0, 00:03:57.415 "disable_auto_failback": false, 00:03:57.415 "generate_uuids": false, 00:03:57.415 "transport_tos": 0, 00:03:57.415 "nvme_error_stat": false, 00:03:57.415 "rdma_srq_size": 0, 00:03:57.415 "io_path_stat": false, 
00:03:57.415 "allow_accel_sequence": false, 00:03:57.415 "rdma_max_cq_size": 0, 00:03:57.415 "rdma_cm_event_timeout_ms": 0, 00:03:57.415 "dhchap_digests": [ 00:03:57.415 "sha256", 00:03:57.415 "sha384", 00:03:57.415 "sha512" 00:03:57.415 ], 00:03:57.415 "dhchap_dhgroups": [ 00:03:57.415 "null", 00:03:57.415 "ffdhe2048", 00:03:57.415 "ffdhe3072", 00:03:57.415 "ffdhe4096", 00:03:57.415 "ffdhe6144", 00:03:57.415 "ffdhe8192" 00:03:57.415 ] 00:03:57.415 } 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "method": "bdev_nvme_set_hotplug", 00:03:57.415 "params": { 00:03:57.415 "period_us": 100000, 00:03:57.415 "enable": false 00:03:57.415 } 00:03:57.415 }, 00:03:57.415 { 00:03:57.415 "method": "bdev_wait_for_examine" 00:03:57.415 } 00:03:57.415 ] 00:03:57.415 }, 00:03:57.416 { 00:03:57.416 "subsystem": "scsi", 00:03:57.416 "config": null 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "subsystem": "scheduler", 00:03:57.416 "config": [ 00:03:57.416 { 00:03:57.416 "method": "framework_set_scheduler", 00:03:57.416 "params": { 00:03:57.416 "name": "static" 00:03:57.416 } 00:03:57.416 } 00:03:57.416 ] 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "subsystem": "vhost_scsi", 00:03:57.416 "config": [] 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "subsystem": "vhost_blk", 00:03:57.416 "config": [] 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "subsystem": "ublk", 00:03:57.416 "config": [] 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "subsystem": "nbd", 00:03:57.416 "config": [] 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "subsystem": "nvmf", 00:03:57.416 "config": [ 00:03:57.416 { 00:03:57.416 "method": "nvmf_set_config", 00:03:57.416 "params": { 00:03:57.416 "discovery_filter": "match_any", 00:03:57.416 "admin_cmd_passthru": { 00:03:57.416 "identify_ctrlr": false 00:03:57.416 }, 00:03:57.416 "dhchap_digests": [ 00:03:57.416 "sha256", 00:03:57.416 "sha384", 00:03:57.416 "sha512" 00:03:57.416 ], 00:03:57.416 "dhchap_dhgroups": [ 00:03:57.416 "null", 00:03:57.416 "ffdhe2048", 00:03:57.416 "ffdhe3072", 00:03:57.416 "ffdhe4096", 00:03:57.416 "ffdhe6144", 00:03:57.416 "ffdhe8192" 00:03:57.416 ] 00:03:57.416 } 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "method": "nvmf_set_max_subsystems", 00:03:57.416 "params": { 00:03:57.416 "max_subsystems": 1024 00:03:57.416 } 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "method": "nvmf_set_crdt", 00:03:57.416 "params": { 00:03:57.416 "crdt1": 0, 00:03:57.416 "crdt2": 0, 00:03:57.416 "crdt3": 0 00:03:57.416 } 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "method": "nvmf_create_transport", 00:03:57.416 "params": { 00:03:57.416 "trtype": "TCP", 00:03:57.416 "max_queue_depth": 128, 00:03:57.416 "max_io_qpairs_per_ctrlr": 127, 00:03:57.416 "in_capsule_data_size": 4096, 00:03:57.416 "max_io_size": 131072, 00:03:57.416 "io_unit_size": 131072, 00:03:57.416 "max_aq_depth": 128, 00:03:57.416 "num_shared_buffers": 511, 00:03:57.416 "buf_cache_size": 4294967295, 00:03:57.416 "dif_insert_or_strip": false, 00:03:57.416 "zcopy": false, 00:03:57.416 "c2h_success": true, 00:03:57.416 "sock_priority": 0, 00:03:57.416 "abort_timeout_sec": 1, 00:03:57.416 "ack_timeout": 0, 00:03:57.416 "data_wr_pool_size": 0 00:03:57.416 } 00:03:57.416 } 00:03:57.416 ] 00:03:57.416 }, 00:03:57.416 { 00:03:57.416 "subsystem": "iscsi", 00:03:57.416 "config": [ 00:03:57.416 { 00:03:57.416 "method": "iscsi_set_options", 00:03:57.416 "params": { 00:03:57.416 "node_base": "iqn.2016-06.io.spdk", 00:03:57.416 "max_sessions": 128, 00:03:57.416 "max_connections_per_session": 2, 00:03:57.416 "max_queue_depth": 64, 00:03:57.416 
"default_time2wait": 2, 00:03:57.416 "default_time2retain": 20, 00:03:57.416 "first_burst_length": 8192, 00:03:57.416 "immediate_data": true, 00:03:57.416 "allow_duplicated_isid": false, 00:03:57.416 "error_recovery_level": 0, 00:03:57.416 "nop_timeout": 60, 00:03:57.416 "nop_in_interval": 30, 00:03:57.416 "disable_chap": false, 00:03:57.416 "require_chap": false, 00:03:57.416 "mutual_chap": false, 00:03:57.416 "chap_group": 0, 00:03:57.416 "max_large_datain_per_connection": 64, 00:03:57.416 "max_r2t_per_connection": 4, 00:03:57.416 "pdu_pool_size": 36864, 00:03:57.416 "immediate_data_pool_size": 16384, 00:03:57.416 "data_out_pool_size": 2048 00:03:57.416 } 00:03:57.416 } 00:03:57.416 ] 00:03:57.416 } 00:03:57.416 ] 00:03:57.416 } 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57366 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57366 ']' 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57366 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57366 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:57.416 killing process with pid 57366 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57366' 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57366 00:03:57.416 17:07:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57366 00:03:58.790 17:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57406 00:03:58.790 17:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:58.790 17:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57406 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57406 ']' 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57406 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57406 00:04:04.055 killing process with pid 57406 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57406' 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57406 00:04:04.055 17:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57406 00:04:04.991 17:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.991 17:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.991 ************************************ 00:04:04.991 END TEST skip_rpc_with_json 00:04:04.991 ************************************ 00:04:04.991 00:04:04.991 real 0m8.664s 00:04:04.991 user 0m8.312s 00:04:04.991 sys 0m0.564s 00:04:04.991 17:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:04.991 17:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.991 17:07:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.991 17:07:47 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.991 17:07:47 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.991 17:07:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.250 ************************************ 00:04:05.250 START TEST skip_rpc_with_delay 00:04:05.250 ************************************ 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:05.250 17:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.250 [2024-10-30 17:07:48.049048] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:05.250 17:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:05.250 17:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:05.250 17:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:05.250 17:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:05.250 00:04:05.250 real 0m0.119s 00:04:05.250 user 0m0.064s 00:04:05.250 sys 0m0.054s 00:04:05.250 ************************************ 00:04:05.250 END TEST skip_rpc_with_delay 00:04:05.250 ************************************ 00:04:05.250 17:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.250 17:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:05.250 17:07:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:05.250 17:07:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:05.250 17:07:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:05.250 17:07:48 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.250 17:07:48 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.250 17:07:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.250 ************************************ 00:04:05.250 START TEST exit_on_failed_rpc_init 00:04:05.250 ************************************ 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57528 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57528 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57528 ']' 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:05.250 17:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.250 [2024-10-30 17:07:48.209953] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:04:05.250 [2024-10-30 17:07:48.210318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57528 ] 00:04:05.508 [2024-10-30 17:07:48.368455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.508 [2024-10-30 17:07:48.449293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:06.073 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.330 [2024-10-30 17:07:49.127567] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:06.330 [2024-10-30 17:07:49.127692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57541 ] 00:04:06.330 [2024-10-30 17:07:49.288627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.588 [2024-10-30 17:07:49.386557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.588 [2024-10-30 17:07:49.386629] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:06.588 [2024-10-30 17:07:49.386643] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:06.588 [2024-10-30 17:07:49.386655] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57528 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57528 ']' 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57528 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57528 00:04:06.845 killing process with pid 57528 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57528' 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57528 00:04:06.845 17:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57528 00:04:07.853 00:04:07.853 real 0m2.629s 00:04:07.853 user 0m2.936s 00:04:07.853 sys 0m0.408s 00:04:07.853 17:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.853 ************************************ 00:04:07.853 END TEST exit_on_failed_rpc_init 00:04:07.853 ************************************ 00:04:07.853 17:07:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.853 17:07:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:07.853 00:04:07.853 real 0m17.910s 00:04:07.853 user 0m17.288s 00:04:07.853 sys 0m1.458s 00:04:07.853 17:07:50 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.853 ************************************ 00:04:07.853 END TEST skip_rpc 00:04:07.853 ************************************ 00:04:07.853 17:07:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.853 17:07:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:07.853 17:07:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.853 17:07:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.853 17:07:50 -- common/autotest_common.sh@10 -- # set +x 00:04:07.853 
************************************ 00:04:07.853 START TEST rpc_client 00:04:07.853 ************************************ 00:04:07.853 17:07:50 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:08.111 * Looking for test storage... 00:04:08.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.111 17:07:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.111 --rc genhtml_branch_coverage=1 00:04:08.111 --rc genhtml_function_coverage=1 00:04:08.111 --rc genhtml_legend=1 00:04:08.111 --rc geninfo_all_blocks=1 00:04:08.111 --rc geninfo_unexecuted_blocks=1 00:04:08.111 00:04:08.111 ' 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.111 --rc genhtml_branch_coverage=1 00:04:08.111 --rc genhtml_function_coverage=1 00:04:08.111 --rc genhtml_legend=1 00:04:08.111 --rc geninfo_all_blocks=1 00:04:08.111 --rc geninfo_unexecuted_blocks=1 00:04:08.111 00:04:08.111 ' 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.111 --rc genhtml_branch_coverage=1 00:04:08.111 --rc genhtml_function_coverage=1 00:04:08.111 --rc genhtml_legend=1 00:04:08.111 --rc geninfo_all_blocks=1 00:04:08.111 --rc geninfo_unexecuted_blocks=1 00:04:08.111 00:04:08.111 ' 00:04:08.111 17:07:50 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.111 --rc genhtml_branch_coverage=1 00:04:08.111 --rc genhtml_function_coverage=1 00:04:08.111 --rc genhtml_legend=1 00:04:08.111 --rc geninfo_all_blocks=1 00:04:08.111 --rc geninfo_unexecuted_blocks=1 00:04:08.111 00:04:08.111 ' 00:04:08.111 17:07:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:08.111 OK 00:04:08.111 17:07:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:08.111 00:04:08.111 real 0m0.185s 00:04:08.111 user 0m0.104s 00:04:08.111 sys 0m0.087s 00:04:08.111 17:07:51 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:08.111 17:07:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:08.111 ************************************ 00:04:08.111 END TEST rpc_client 00:04:08.111 ************************************ 00:04:08.111 17:07:51 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:08.111 17:07:51 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.111 17:07:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.111 17:07:51 -- common/autotest_common.sh@10 -- # set +x 00:04:08.111 ************************************ 00:04:08.111 START TEST json_config 00:04:08.111 ************************************ 00:04:08.111 17:07:51 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:08.111 17:07:51 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.111 17:07:51 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.111 17:07:51 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.371 17:07:51 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.371 17:07:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.371 17:07:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.371 17:07:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.371 17:07:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.371 17:07:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.371 17:07:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.371 17:07:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.371 17:07:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.371 17:07:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.371 17:07:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.371 17:07:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.371 17:07:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:08.371 17:07:51 json_config -- scripts/common.sh@345 -- # : 1 00:04:08.371 17:07:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.371 17:07:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.371 17:07:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:08.371 17:07:51 json_config -- scripts/common.sh@353 -- # local d=1 00:04:08.371 17:07:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.371 17:07:51 json_config -- scripts/common.sh@355 -- # echo 1 00:04:08.371 17:07:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.371 17:07:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:08.371 17:07:51 json_config -- scripts/common.sh@353 -- # local d=2 00:04:08.371 17:07:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.371 17:07:51 json_config -- scripts/common.sh@355 -- # echo 2 00:04:08.371 17:07:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.371 17:07:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.371 17:07:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.371 17:07:51 json_config -- scripts/common.sh@368 -- # return 0 00:04:08.371 17:07:51 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.371 17:07:51 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.371 --rc genhtml_branch_coverage=1 00:04:08.371 --rc genhtml_function_coverage=1 00:04:08.371 --rc genhtml_legend=1 00:04:08.371 --rc geninfo_all_blocks=1 00:04:08.371 --rc geninfo_unexecuted_blocks=1 00:04:08.371 00:04:08.371 ' 00:04:08.371 17:07:51 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.371 --rc genhtml_branch_coverage=1 00:04:08.371 --rc genhtml_function_coverage=1 00:04:08.371 --rc genhtml_legend=1 00:04:08.371 --rc geninfo_all_blocks=1 00:04:08.371 --rc geninfo_unexecuted_blocks=1 00:04:08.371 00:04:08.371 ' 00:04:08.371 17:07:51 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.371 --rc genhtml_branch_coverage=1 00:04:08.371 --rc genhtml_function_coverage=1 00:04:08.372 --rc genhtml_legend=1 00:04:08.372 --rc geninfo_all_blocks=1 00:04:08.372 --rc geninfo_unexecuted_blocks=1 00:04:08.372 00:04:08.372 ' 00:04:08.372 17:07:51 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.372 --rc genhtml_branch_coverage=1 00:04:08.372 --rc genhtml_function_coverage=1 00:04:08.372 --rc genhtml_legend=1 00:04:08.372 --rc geninfo_all_blocks=1 00:04:08.372 --rc geninfo_unexecuted_blocks=1 00:04:08.372 00:04:08.372 ' 00:04:08.372 17:07:51 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.372 17:07:51 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b3233cb-2bfc-4fea-8c96-11b7d418394c 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9b3233cb-2bfc-4fea-8c96-11b7d418394c 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:08.372 17:07:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:08.372 17:07:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.372 17:07:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.372 17:07:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.372 17:07:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.372 17:07:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.372 17:07:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.372 17:07:51 json_config -- paths/export.sh@5 -- # export PATH 00:04:08.372 17:07:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@51 -- # : 0 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:08.372 17:07:51 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:08.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:08.372 17:07:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:08.372 17:07:51 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:08.372 17:07:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:08.372 17:07:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:08.372 17:07:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:08.372 17:07:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:08.372 17:07:51 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:08.372 WARNING: No tests are enabled so not running JSON configuration tests 00:04:08.372 17:07:51 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:08.372 00:04:08.372 real 0m0.130s 00:04:08.372 user 0m0.080s 00:04:08.372 sys 0m0.054s 00:04:08.372 17:07:51 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:08.372 17:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.372 ************************************ 00:04:08.372 END TEST json_config 00:04:08.372 ************************************ 00:04:08.372 17:07:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:08.372 17:07:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.372 17:07:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.372 17:07:51 -- common/autotest_common.sh@10 -- # set +x 00:04:08.372 ************************************ 00:04:08.372 START TEST json_config_extra_key 00:04:08.372 ************************************ 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.372 17:07:51 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.372 17:07:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.372 --rc genhtml_branch_coverage=1 00:04:08.372 --rc genhtml_function_coverage=1 00:04:08.372 --rc genhtml_legend=1 00:04:08.372 --rc geninfo_all_blocks=1 00:04:08.372 --rc geninfo_unexecuted_blocks=1 00:04:08.372 00:04:08.372 ' 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.372 --rc genhtml_branch_coverage=1 00:04:08.372 --rc genhtml_function_coverage=1 00:04:08.372 --rc genhtml_legend=1 00:04:08.372 --rc geninfo_all_blocks=1 00:04:08.372 --rc geninfo_unexecuted_blocks=1 00:04:08.372 00:04:08.372 ' 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.372 --rc genhtml_branch_coverage=1 00:04:08.372 --rc genhtml_function_coverage=1 00:04:08.372 --rc genhtml_legend=1 00:04:08.372 --rc geninfo_all_blocks=1 00:04:08.372 --rc geninfo_unexecuted_blocks=1 00:04:08.372 00:04:08.372 ' 00:04:08.372 17:07:51 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.373 --rc genhtml_branch_coverage=1 00:04:08.373 --rc 
genhtml_function_coverage=1 00:04:08.373 --rc genhtml_legend=1 00:04:08.373 --rc geninfo_all_blocks=1 00:04:08.373 --rc geninfo_unexecuted_blocks=1 00:04:08.373 00:04:08.373 ' 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b3233cb-2bfc-4fea-8c96-11b7d418394c 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9b3233cb-2bfc-4fea-8c96-11b7d418394c 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:08.373 17:07:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:08.373 17:07:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.373 17:07:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.373 17:07:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.373 17:07:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.373 17:07:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.373 17:07:51 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.373 17:07:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:08.373 17:07:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:08.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:08.373 17:07:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:08.373 INFO: launching applications... 00:04:08.373 Waiting for target to run... 
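The repeated "[: : integer expression expected" message above comes from nvmf/common.sh line 33 applying an integer test ('[' '' -eq 1 ']') to a variable that is empty in this run. A minimal defensive sketch, using a hypothetical flag name rather than the one common.sh actually tests:

    # Substitute a numeric default before the -eq test so an unset/empty value cannot
    # trigger "integer expression expected" (SOME_FLAG is illustrative only).
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi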
00:04:08.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:08.373 17:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57734 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:08.373 17:07:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57734 /var/tmp/spdk_tgt.sock 00:04:08.373 17:07:51 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57734 ']' 00:04:08.373 17:07:51 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:08.373 17:07:51 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:08.373 17:07:51 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:08.373 17:07:51 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:08.373 17:07:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:08.631 [2024-10-30 17:07:51.420916] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:08.631 [2024-10-30 17:07:51.421683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57734 ] 00:04:08.889 [2024-10-30 17:07:51.739883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.889 [2024-10-30 17:07:51.829416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.456 17:07:52 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:09.456 17:07:52 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:09.456 00:04:09.456 17:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:09.456 INFO: shutting down applications... 
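The trace above launches spdk_tgt with the extra_key.json config and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that polling idea, with illustrative names and retry counts rather than the exact helper from autotest_common.sh:

    wait_for_rpc_socket() {
        local pid=$1 sock=$2 retries=${3:-100} i
        for ((i = 0; i < retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # give up if the target died
            if [ -S "$sock" ] && \
               /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0                               # socket exists and answers RPCs
            fi
            sleep 0.5
        done
        return 1
    }
    # e.g. wait_for_rpc_socket 57734 /var/tmp/spdk_tgt.sock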
00:04:09.456 17:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57734 ]] 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57734 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:09.456 17:07:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.022 17:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.022 17:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.022 17:07:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:10.022 17:07:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.593 17:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.593 17:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.593 17:07:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:10.593 17:07:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.159 17:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.159 17:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.159 17:07:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57734 00:04:11.159 17:07:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.159 17:07:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:11.159 17:07:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.159 SPDK target shutdown done 00:04:11.159 17:07:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.159 Success 00:04:11.159 17:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:11.159 00:04:11.159 real 0m2.633s 00:04:11.159 user 0m2.422s 00:04:11.159 sys 0m0.378s 00:04:11.159 ************************************ 00:04:11.159 END TEST json_config_extra_key 00:04:11.159 ************************************ 00:04:11.159 17:07:53 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.159 17:07:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:11.159 17:07:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.159 17:07:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.159 17:07:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.159 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:11.159 ************************************ 00:04:11.160 START TEST alias_rpc 00:04:11.160 ************************************ 00:04:11.160 17:07:53 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.160 * Looking for test storage... 
00:04:11.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:11.160 17:07:53 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.160 17:07:53 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.160 17:07:53 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
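The trace around this point (and continuing below) steps through scripts/common.sh's cmp_versions to decide whether the installed lcov predates 2.x before choosing coverage options. A simplified field-by-field sketch of that comparison, not the exact upstream implementation:

    version_lt() {                                     # returns 0 if $1 < $2
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1                    # first differing field decides
        done
        return 1                                       # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"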
00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.160 17:07:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.160 --rc genhtml_branch_coverage=1 00:04:11.160 --rc genhtml_function_coverage=1 00:04:11.160 --rc genhtml_legend=1 00:04:11.160 --rc geninfo_all_blocks=1 00:04:11.160 --rc geninfo_unexecuted_blocks=1 00:04:11.160 00:04:11.160 ' 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.160 --rc genhtml_branch_coverage=1 00:04:11.160 --rc genhtml_function_coverage=1 00:04:11.160 --rc genhtml_legend=1 00:04:11.160 --rc geninfo_all_blocks=1 00:04:11.160 --rc geninfo_unexecuted_blocks=1 00:04:11.160 00:04:11.160 ' 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.160 --rc genhtml_branch_coverage=1 00:04:11.160 --rc genhtml_function_coverage=1 00:04:11.160 --rc genhtml_legend=1 00:04:11.160 --rc geninfo_all_blocks=1 00:04:11.160 --rc geninfo_unexecuted_blocks=1 00:04:11.160 00:04:11.160 ' 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.160 --rc genhtml_branch_coverage=1 00:04:11.160 --rc genhtml_function_coverage=1 00:04:11.160 --rc genhtml_legend=1 00:04:11.160 --rc geninfo_all_blocks=1 00:04:11.160 --rc geninfo_unexecuted_blocks=1 00:04:11.160 00:04:11.160 ' 00:04:11.160 17:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:11.160 17:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57821 00:04:11.160 17:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57821 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57821 ']' 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.160 17:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.160 17:07:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.160 [2024-10-30 17:07:54.095800] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:04:11.160 [2024-10-30 17:07:54.096081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57821 ] 00:04:11.420 [2024-10-30 17:07:54.250228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.420 [2024-10-30 17:07:54.328874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.992 17:07:54 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:11.992 17:07:54 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:11.992 17:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:12.253 17:07:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57821 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57821 ']' 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57821 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57821 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57821' 00:04:12.253 killing process with pid 57821 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@971 -- # kill 57821 00:04:12.253 17:07:55 alias_rpc -- common/autotest_common.sh@976 -- # wait 57821 00:04:13.636 ************************************ 00:04:13.636 END TEST alias_rpc 00:04:13.636 ************************************ 00:04:13.636 00:04:13.636 real 0m2.394s 00:04:13.636 user 0m2.469s 00:04:13.636 sys 0m0.384s 00:04:13.636 17:07:56 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.636 17:07:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.636 17:07:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:13.636 17:07:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:13.636 17:07:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.636 17:07:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.636 17:07:56 -- common/autotest_common.sh@10 -- # set +x 00:04:13.636 ************************************ 00:04:13.636 START TEST spdkcli_tcp 00:04:13.636 ************************************ 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:13.636 * Looking for test storage... 
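The alias_rpc teardown above runs killprocess: confirm the pid is still alive, log which process is being killed, then signal it and wait for it to exit. A simplified sketch of that pattern (signal choice and the sudo check omitted), assuming the target was started by the same shell so wait can reap it:

    killprocess() {
        # Illustrative reimplementation, not the version in autotest_common.sh.
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null                    # reap the child and collect its exit status
    }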
00:04:13.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.636 17:07:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:13.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.636 --rc genhtml_branch_coverage=1 00:04:13.636 --rc genhtml_function_coverage=1 00:04:13.636 --rc genhtml_legend=1 00:04:13.636 --rc geninfo_all_blocks=1 00:04:13.636 --rc geninfo_unexecuted_blocks=1 00:04:13.636 00:04:13.636 ' 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:13.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.636 --rc genhtml_branch_coverage=1 00:04:13.636 --rc genhtml_function_coverage=1 00:04:13.636 --rc genhtml_legend=1 00:04:13.636 --rc geninfo_all_blocks=1 00:04:13.636 --rc geninfo_unexecuted_blocks=1 00:04:13.636 
00:04:13.636 ' 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:13.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.636 --rc genhtml_branch_coverage=1 00:04:13.636 --rc genhtml_function_coverage=1 00:04:13.636 --rc genhtml_legend=1 00:04:13.636 --rc geninfo_all_blocks=1 00:04:13.636 --rc geninfo_unexecuted_blocks=1 00:04:13.636 00:04:13.636 ' 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:13.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.636 --rc genhtml_branch_coverage=1 00:04:13.636 --rc genhtml_function_coverage=1 00:04:13.636 --rc genhtml_legend=1 00:04:13.636 --rc geninfo_all_blocks=1 00:04:13.636 --rc geninfo_unexecuted_blocks=1 00:04:13.636 00:04:13.636 ' 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57911 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57911 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57911 ']' 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.636 17:07:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:13.636 17:07:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:13.637 17:07:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.637 17:07:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:13.637 17:07:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.637 [2024-10-30 17:07:56.531881] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:04:13.637 [2024-10-30 17:07:56.532151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57911 ] 00:04:13.896 [2024-10-30 17:07:56.678130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.896 [2024-10-30 17:07:56.756505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.896 [2024-10-30 17:07:56.756573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.461 17:07:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:14.461 17:07:57 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:14.461 17:07:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:14.461 17:07:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57928 00:04:14.461 17:07:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:14.720 [ 00:04:14.720 "bdev_malloc_delete", 00:04:14.720 "bdev_malloc_create", 00:04:14.720 "bdev_null_resize", 00:04:14.720 "bdev_null_delete", 00:04:14.720 "bdev_null_create", 00:04:14.720 "bdev_nvme_cuse_unregister", 00:04:14.720 "bdev_nvme_cuse_register", 00:04:14.720 "bdev_opal_new_user", 00:04:14.720 "bdev_opal_set_lock_state", 00:04:14.720 "bdev_opal_delete", 00:04:14.720 "bdev_opal_get_info", 00:04:14.720 "bdev_opal_create", 00:04:14.720 "bdev_nvme_opal_revert", 00:04:14.720 "bdev_nvme_opal_init", 00:04:14.720 "bdev_nvme_send_cmd", 00:04:14.720 "bdev_nvme_set_keys", 00:04:14.720 "bdev_nvme_get_path_iostat", 00:04:14.720 "bdev_nvme_get_mdns_discovery_info", 00:04:14.720 "bdev_nvme_stop_mdns_discovery", 00:04:14.720 "bdev_nvme_start_mdns_discovery", 00:04:14.720 "bdev_nvme_set_multipath_policy", 00:04:14.720 "bdev_nvme_set_preferred_path", 00:04:14.720 "bdev_nvme_get_io_paths", 00:04:14.720 "bdev_nvme_remove_error_injection", 00:04:14.720 "bdev_nvme_add_error_injection", 00:04:14.720 "bdev_nvme_get_discovery_info", 00:04:14.720 "bdev_nvme_stop_discovery", 00:04:14.720 "bdev_nvme_start_discovery", 00:04:14.720 "bdev_nvme_get_controller_health_info", 00:04:14.720 "bdev_nvme_disable_controller", 00:04:14.720 "bdev_nvme_enable_controller", 00:04:14.720 "bdev_nvme_reset_controller", 00:04:14.720 "bdev_nvme_get_transport_statistics", 00:04:14.720 "bdev_nvme_apply_firmware", 00:04:14.720 "bdev_nvme_detach_controller", 00:04:14.720 "bdev_nvme_get_controllers", 00:04:14.720 "bdev_nvme_attach_controller", 00:04:14.720 "bdev_nvme_set_hotplug", 00:04:14.720 "bdev_nvme_set_options", 00:04:14.720 "bdev_passthru_delete", 00:04:14.720 "bdev_passthru_create", 00:04:14.720 "bdev_lvol_set_parent_bdev", 00:04:14.720 "bdev_lvol_set_parent", 00:04:14.720 "bdev_lvol_check_shallow_copy", 00:04:14.720 "bdev_lvol_start_shallow_copy", 00:04:14.720 "bdev_lvol_grow_lvstore", 00:04:14.720 "bdev_lvol_get_lvols", 00:04:14.720 "bdev_lvol_get_lvstores", 00:04:14.720 "bdev_lvol_delete", 00:04:14.720 "bdev_lvol_set_read_only", 00:04:14.720 "bdev_lvol_resize", 00:04:14.720 "bdev_lvol_decouple_parent", 00:04:14.720 "bdev_lvol_inflate", 00:04:14.720 "bdev_lvol_rename", 00:04:14.720 "bdev_lvol_clone_bdev", 00:04:14.720 "bdev_lvol_clone", 00:04:14.720 "bdev_lvol_snapshot", 00:04:14.720 "bdev_lvol_create", 00:04:14.720 "bdev_lvol_delete_lvstore", 00:04:14.720 "bdev_lvol_rename_lvstore", 00:04:14.720 
"bdev_lvol_create_lvstore", 00:04:14.720 "bdev_raid_set_options", 00:04:14.720 "bdev_raid_remove_base_bdev", 00:04:14.720 "bdev_raid_add_base_bdev", 00:04:14.720 "bdev_raid_delete", 00:04:14.720 "bdev_raid_create", 00:04:14.720 "bdev_raid_get_bdevs", 00:04:14.720 "bdev_error_inject_error", 00:04:14.720 "bdev_error_delete", 00:04:14.720 "bdev_error_create", 00:04:14.720 "bdev_split_delete", 00:04:14.720 "bdev_split_create", 00:04:14.720 "bdev_delay_delete", 00:04:14.720 "bdev_delay_create", 00:04:14.720 "bdev_delay_update_latency", 00:04:14.720 "bdev_zone_block_delete", 00:04:14.720 "bdev_zone_block_create", 00:04:14.720 "blobfs_create", 00:04:14.720 "blobfs_detect", 00:04:14.720 "blobfs_set_cache_size", 00:04:14.720 "bdev_xnvme_delete", 00:04:14.720 "bdev_xnvme_create", 00:04:14.720 "bdev_aio_delete", 00:04:14.720 "bdev_aio_rescan", 00:04:14.720 "bdev_aio_create", 00:04:14.720 "bdev_ftl_set_property", 00:04:14.720 "bdev_ftl_get_properties", 00:04:14.720 "bdev_ftl_get_stats", 00:04:14.720 "bdev_ftl_unmap", 00:04:14.720 "bdev_ftl_unload", 00:04:14.720 "bdev_ftl_delete", 00:04:14.720 "bdev_ftl_load", 00:04:14.720 "bdev_ftl_create", 00:04:14.720 "bdev_virtio_attach_controller", 00:04:14.720 "bdev_virtio_scsi_get_devices", 00:04:14.720 "bdev_virtio_detach_controller", 00:04:14.720 "bdev_virtio_blk_set_hotplug", 00:04:14.720 "bdev_iscsi_delete", 00:04:14.720 "bdev_iscsi_create", 00:04:14.720 "bdev_iscsi_set_options", 00:04:14.720 "accel_error_inject_error", 00:04:14.720 "ioat_scan_accel_module", 00:04:14.720 "dsa_scan_accel_module", 00:04:14.720 "iaa_scan_accel_module", 00:04:14.720 "keyring_file_remove_key", 00:04:14.720 "keyring_file_add_key", 00:04:14.720 "keyring_linux_set_options", 00:04:14.720 "fsdev_aio_delete", 00:04:14.720 "fsdev_aio_create", 00:04:14.720 "iscsi_get_histogram", 00:04:14.720 "iscsi_enable_histogram", 00:04:14.720 "iscsi_set_options", 00:04:14.720 "iscsi_get_auth_groups", 00:04:14.720 "iscsi_auth_group_remove_secret", 00:04:14.720 "iscsi_auth_group_add_secret", 00:04:14.720 "iscsi_delete_auth_group", 00:04:14.720 "iscsi_create_auth_group", 00:04:14.720 "iscsi_set_discovery_auth", 00:04:14.720 "iscsi_get_options", 00:04:14.720 "iscsi_target_node_request_logout", 00:04:14.720 "iscsi_target_node_set_redirect", 00:04:14.720 "iscsi_target_node_set_auth", 00:04:14.720 "iscsi_target_node_add_lun", 00:04:14.720 "iscsi_get_stats", 00:04:14.720 "iscsi_get_connections", 00:04:14.720 "iscsi_portal_group_set_auth", 00:04:14.720 "iscsi_start_portal_group", 00:04:14.720 "iscsi_delete_portal_group", 00:04:14.720 "iscsi_create_portal_group", 00:04:14.720 "iscsi_get_portal_groups", 00:04:14.720 "iscsi_delete_target_node", 00:04:14.720 "iscsi_target_node_remove_pg_ig_maps", 00:04:14.720 "iscsi_target_node_add_pg_ig_maps", 00:04:14.720 "iscsi_create_target_node", 00:04:14.720 "iscsi_get_target_nodes", 00:04:14.720 "iscsi_delete_initiator_group", 00:04:14.720 "iscsi_initiator_group_remove_initiators", 00:04:14.720 "iscsi_initiator_group_add_initiators", 00:04:14.720 "iscsi_create_initiator_group", 00:04:14.720 "iscsi_get_initiator_groups", 00:04:14.720 "nvmf_set_crdt", 00:04:14.720 "nvmf_set_config", 00:04:14.720 "nvmf_set_max_subsystems", 00:04:14.720 "nvmf_stop_mdns_prr", 00:04:14.720 "nvmf_publish_mdns_prr", 00:04:14.720 "nvmf_subsystem_get_listeners", 00:04:14.720 "nvmf_subsystem_get_qpairs", 00:04:14.720 "nvmf_subsystem_get_controllers", 00:04:14.720 "nvmf_get_stats", 00:04:14.720 "nvmf_get_transports", 00:04:14.720 "nvmf_create_transport", 00:04:14.720 "nvmf_get_targets", 00:04:14.720 
"nvmf_delete_target", 00:04:14.720 "nvmf_create_target", 00:04:14.720 "nvmf_subsystem_allow_any_host", 00:04:14.720 "nvmf_subsystem_set_keys", 00:04:14.720 "nvmf_subsystem_remove_host", 00:04:14.720 "nvmf_subsystem_add_host", 00:04:14.720 "nvmf_ns_remove_host", 00:04:14.720 "nvmf_ns_add_host", 00:04:14.720 "nvmf_subsystem_remove_ns", 00:04:14.720 "nvmf_subsystem_set_ns_ana_group", 00:04:14.720 "nvmf_subsystem_add_ns", 00:04:14.720 "nvmf_subsystem_listener_set_ana_state", 00:04:14.720 "nvmf_discovery_get_referrals", 00:04:14.720 "nvmf_discovery_remove_referral", 00:04:14.720 "nvmf_discovery_add_referral", 00:04:14.720 "nvmf_subsystem_remove_listener", 00:04:14.720 "nvmf_subsystem_add_listener", 00:04:14.720 "nvmf_delete_subsystem", 00:04:14.720 "nvmf_create_subsystem", 00:04:14.720 "nvmf_get_subsystems", 00:04:14.720 "env_dpdk_get_mem_stats", 00:04:14.720 "nbd_get_disks", 00:04:14.720 "nbd_stop_disk", 00:04:14.720 "nbd_start_disk", 00:04:14.720 "ublk_recover_disk", 00:04:14.720 "ublk_get_disks", 00:04:14.720 "ublk_stop_disk", 00:04:14.720 "ublk_start_disk", 00:04:14.720 "ublk_destroy_target", 00:04:14.720 "ublk_create_target", 00:04:14.720 "virtio_blk_create_transport", 00:04:14.720 "virtio_blk_get_transports", 00:04:14.720 "vhost_controller_set_coalescing", 00:04:14.720 "vhost_get_controllers", 00:04:14.720 "vhost_delete_controller", 00:04:14.720 "vhost_create_blk_controller", 00:04:14.720 "vhost_scsi_controller_remove_target", 00:04:14.720 "vhost_scsi_controller_add_target", 00:04:14.720 "vhost_start_scsi_controller", 00:04:14.720 "vhost_create_scsi_controller", 00:04:14.720 "thread_set_cpumask", 00:04:14.720 "scheduler_set_options", 00:04:14.720 "framework_get_governor", 00:04:14.720 "framework_get_scheduler", 00:04:14.720 "framework_set_scheduler", 00:04:14.720 "framework_get_reactors", 00:04:14.720 "thread_get_io_channels", 00:04:14.720 "thread_get_pollers", 00:04:14.720 "thread_get_stats", 00:04:14.720 "framework_monitor_context_switch", 00:04:14.720 "spdk_kill_instance", 00:04:14.720 "log_enable_timestamps", 00:04:14.720 "log_get_flags", 00:04:14.720 "log_clear_flag", 00:04:14.720 "log_set_flag", 00:04:14.720 "log_get_level", 00:04:14.720 "log_set_level", 00:04:14.720 "log_get_print_level", 00:04:14.720 "log_set_print_level", 00:04:14.720 "framework_enable_cpumask_locks", 00:04:14.720 "framework_disable_cpumask_locks", 00:04:14.720 "framework_wait_init", 00:04:14.720 "framework_start_init", 00:04:14.720 "scsi_get_devices", 00:04:14.720 "bdev_get_histogram", 00:04:14.720 "bdev_enable_histogram", 00:04:14.720 "bdev_set_qos_limit", 00:04:14.720 "bdev_set_qd_sampling_period", 00:04:14.720 "bdev_get_bdevs", 00:04:14.720 "bdev_reset_iostat", 00:04:14.721 "bdev_get_iostat", 00:04:14.721 "bdev_examine", 00:04:14.721 "bdev_wait_for_examine", 00:04:14.721 "bdev_set_options", 00:04:14.721 "accel_get_stats", 00:04:14.721 "accel_set_options", 00:04:14.721 "accel_set_driver", 00:04:14.721 "accel_crypto_key_destroy", 00:04:14.721 "accel_crypto_keys_get", 00:04:14.721 "accel_crypto_key_create", 00:04:14.721 "accel_assign_opc", 00:04:14.721 "accel_get_module_info", 00:04:14.721 "accel_get_opc_assignments", 00:04:14.721 "vmd_rescan", 00:04:14.721 "vmd_remove_device", 00:04:14.721 "vmd_enable", 00:04:14.721 "sock_get_default_impl", 00:04:14.721 "sock_set_default_impl", 00:04:14.721 "sock_impl_set_options", 00:04:14.721 "sock_impl_get_options", 00:04:14.721 "iobuf_get_stats", 00:04:14.721 "iobuf_set_options", 00:04:14.721 "keyring_get_keys", 00:04:14.721 "framework_get_pci_devices", 00:04:14.721 
"framework_get_config", 00:04:14.721 "framework_get_subsystems", 00:04:14.721 "fsdev_set_opts", 00:04:14.721 "fsdev_get_opts", 00:04:14.721 "trace_get_info", 00:04:14.721 "trace_get_tpoint_group_mask", 00:04:14.721 "trace_disable_tpoint_group", 00:04:14.721 "trace_enable_tpoint_group", 00:04:14.721 "trace_clear_tpoint_mask", 00:04:14.721 "trace_set_tpoint_mask", 00:04:14.721 "notify_get_notifications", 00:04:14.721 "notify_get_types", 00:04:14.721 "spdk_get_version", 00:04:14.721 "rpc_get_methods" 00:04:14.721 ] 00:04:14.721 17:07:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.721 17:07:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:14.721 17:07:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57911 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57911 ']' 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57911 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57911 00:04:14.721 killing process with pid 57911 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57911' 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57911 00:04:14.721 17:07:57 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57911 00:04:16.095 ************************************ 00:04:16.095 END TEST spdkcli_tcp 00:04:16.095 ************************************ 00:04:16.095 00:04:16.095 real 0m2.416s 00:04:16.095 user 0m4.322s 00:04:16.095 sys 0m0.385s 00:04:16.095 17:07:58 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.095 17:07:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.095 17:07:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:16.095 17:07:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.095 17:07:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.095 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:04:16.095 ************************************ 00:04:16.095 START TEST dpdk_mem_utility 00:04:16.095 ************************************ 00:04:16.095 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:16.095 * Looking for test storage... 
00:04:16.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:16.095 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.095 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.095 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.095 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.095 17:07:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:16.095 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.095 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.095 --rc genhtml_branch_coverage=1 00:04:16.095 --rc genhtml_function_coverage=1 00:04:16.095 --rc genhtml_legend=1 00:04:16.095 --rc geninfo_all_blocks=1 00:04:16.096 --rc geninfo_unexecuted_blocks=1 00:04:16.096 00:04:16.096 ' 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.096 --rc 
genhtml_branch_coverage=1 00:04:16.096 --rc genhtml_function_coverage=1 00:04:16.096 --rc genhtml_legend=1 00:04:16.096 --rc geninfo_all_blocks=1 00:04:16.096 --rc geninfo_unexecuted_blocks=1 00:04:16.096 00:04:16.096 ' 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:16.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.096 --rc genhtml_branch_coverage=1 00:04:16.096 --rc genhtml_function_coverage=1 00:04:16.096 --rc genhtml_legend=1 00:04:16.096 --rc geninfo_all_blocks=1 00:04:16.096 --rc geninfo_unexecuted_blocks=1 00:04:16.096 00:04:16.096 ' 00:04:16.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.096 --rc genhtml_branch_coverage=1 00:04:16.096 --rc genhtml_function_coverage=1 00:04:16.096 --rc genhtml_legend=1 00:04:16.096 --rc geninfo_all_blocks=1 00:04:16.096 --rc geninfo_unexecuted_blocks=1 00:04:16.096 00:04:16.096 ' 00:04:16.096 17:07:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:16.096 17:07:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58017 00:04:16.096 17:07:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58017 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58017 ']' 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.096 17:07:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.096 17:07:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.096 [2024-10-30 17:07:58.961631] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:04:16.096 [2024-10-30 17:07:58.961868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58017 ] 00:04:16.354 [2024-10-30 17:07:59.111594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.354 [2024-10-30 17:07:59.187379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.918 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:16.918 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:16.918 17:07:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:16.918 17:07:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:16.918 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.918 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.918 { 00:04:16.918 "filename": "/tmp/spdk_mem_dump.txt" 00:04:16.918 } 00:04:16.918 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.918 17:07:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:16.918 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:16.918 1 heaps totaling size 816.000000 MiB 00:04:16.918 size: 816.000000 MiB heap id: 0 00:04:16.918 end heaps---------- 00:04:16.918 9 mempools totaling size 595.772034 MiB 00:04:16.918 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:16.918 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:16.918 size: 92.545471 MiB name: bdev_io_58017 00:04:16.918 size: 50.003479 MiB name: msgpool_58017 00:04:16.918 size: 36.509338 MiB name: fsdev_io_58017 00:04:16.918 size: 21.763794 MiB name: PDU_Pool 00:04:16.918 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:16.918 size: 4.133484 MiB name: evtpool_58017 00:04:16.918 size: 0.026123 MiB name: Session_Pool 00:04:16.918 end mempools------- 00:04:16.918 6 memzones totaling size 4.142822 MiB 00:04:16.918 size: 1.000366 MiB name: RG_ring_0_58017 00:04:16.918 size: 1.000366 MiB name: RG_ring_1_58017 00:04:16.918 size: 1.000366 MiB name: RG_ring_4_58017 00:04:16.918 size: 1.000366 MiB name: RG_ring_5_58017 00:04:16.918 size: 0.125366 MiB name: RG_ring_2_58017 00:04:16.918 size: 0.015991 MiB name: RG_ring_3_58017 00:04:16.918 end memzones------- 00:04:16.918 17:07:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:16.918 heap id: 0 total size: 816.000000 MiB number of busy elements: 316 number of free elements: 18 00:04:16.918 list of free elements. 
size: 16.791138 MiB 00:04:16.918 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:16.918 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:16.918 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:16.918 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:16.918 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:16.918 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:16.918 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:16.918 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:16.918 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:16.918 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:16.918 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:16.918 element at address: 0x20001ac00000 with size: 0.559265 MiB 00:04:16.918 element at address: 0x200000c00000 with size: 0.492126 MiB 00:04:16.918 element at address: 0x200018e00000 with size: 0.488464 MiB 00:04:16.918 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:16.918 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:16.918 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:16.918 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:16.918 list of standard malloc elements. size: 199.287964 MiB 00:04:16.918 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:16.918 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:16.918 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:16.918 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:16.918 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:16.918 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:16.918 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:16.918 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:16.918 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:16.918 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:16.918 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:16.918 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:16.918 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:16.918 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:16.919 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:16.919 element at 
address: 0x200000cff000 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d8c0 
with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:16.919 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:16.919 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 
00:04:16.919 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:16.919 element at 
address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:16.919 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:16.919 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d780 
with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:16.920 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:16.920 list of memzone associated elements. 
size: 599.920898 MiB 00:04:16.920 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:16.920 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:16.920 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:16.920 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:16.920 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:16.920 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58017_0 00:04:16.920 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:16.920 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58017_0 00:04:16.920 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:16.920 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58017_0 00:04:16.920 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:16.920 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:16.920 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:16.920 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:16.920 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:16.920 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58017_0 00:04:16.920 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:16.920 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58017 00:04:16.920 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:16.920 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58017 00:04:16.920 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:16.920 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:16.920 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:16.920 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:16.920 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:16.920 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:16.920 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:16.920 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:16.920 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:16.920 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58017 00:04:16.920 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:16.920 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58017 00:04:16.920 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:16.920 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58017 00:04:16.920 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:16.920 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58017 00:04:16.920 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:16.920 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58017 00:04:16.920 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:16.920 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58017 00:04:16.920 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:16.920 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:16.920 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:16.920 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:16.920 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:16.920 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:16.920 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:16.920 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58017 00:04:16.920 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:16.920 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58017 00:04:16.920 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:16.920 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:16.920 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:16.920 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:16.920 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:16.920 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58017 00:04:16.920 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:16.920 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:16.920 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:16.920 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58017 00:04:16.920 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:16.920 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58017 00:04:16.920 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:16.920 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58017 00:04:16.920 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:16.920 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:16.920 17:07:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:16.920 17:07:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58017 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58017 ']' 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58017 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58017 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58017' 00:04:16.920 killing process with pid 58017 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58017 00:04:16.920 17:07:59 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58017 00:04:18.321 00:04:18.321 real 0m2.238s 00:04:18.321 user 0m2.185s 00:04:18.321 sys 0m0.364s 00:04:18.321 17:08:01 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.321 17:08:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:18.321 ************************************ 00:04:18.321 END TEST dpdk_mem_utility 00:04:18.321 ************************************ 00:04:18.321 17:08:01 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:18.321 17:08:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.321 17:08:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.321 17:08:01 -- common/autotest_common.sh@10 -- # set +x 
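Note on the dpdk_mem_utility output above: the long "element at address ... with size ... MiB" listing is the raw malloc-element and memzone dump the test prints before tearing the target down. Purely as an illustration (this helper is hypothetical and not part of the SPDK tree), a saved copy of that dump can be tallied with a short grep/awk filter:

#!/usr/bin/env bash
# sum_mem_elements.sh - hypothetical helper, not part of the SPDK repo.
# Tallies the "with size: N MiB" records from a saved dpdk_mem_utility dump
# and prints a per-size-class count plus the grand total.
log_file=${1:-dpdk_mem_dump.log}

grep -o 'with size: [0-9.]* MiB' "$log_file" \
    | awk '{ total += $3; count[$3]++ }
           END {
               for (s in count) printf "size %s MiB x %d\n", s, count[s]
               printf "total: %.6f MiB across %d elements\n", total, NR
           }'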
00:04:18.322 ************************************ 00:04:18.322 START TEST event 00:04:18.322 ************************************ 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:18.322 * Looking for test storage... 00:04:18.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:18.322 17:08:01 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.322 17:08:01 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.322 17:08:01 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.322 17:08:01 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.322 17:08:01 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.322 17:08:01 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.322 17:08:01 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.322 17:08:01 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.322 17:08:01 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.322 17:08:01 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.322 17:08:01 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.322 17:08:01 event -- scripts/common.sh@344 -- # case "$op" in 00:04:18.322 17:08:01 event -- scripts/common.sh@345 -- # : 1 00:04:18.322 17:08:01 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.322 17:08:01 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.322 17:08:01 event -- scripts/common.sh@365 -- # decimal 1 00:04:18.322 17:08:01 event -- scripts/common.sh@353 -- # local d=1 00:04:18.322 17:08:01 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.322 17:08:01 event -- scripts/common.sh@355 -- # echo 1 00:04:18.322 17:08:01 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.322 17:08:01 event -- scripts/common.sh@366 -- # decimal 2 00:04:18.322 17:08:01 event -- scripts/common.sh@353 -- # local d=2 00:04:18.322 17:08:01 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.322 17:08:01 event -- scripts/common.sh@355 -- # echo 2 00:04:18.322 17:08:01 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.322 17:08:01 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.322 17:08:01 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.322 17:08:01 event -- scripts/common.sh@368 -- # return 0 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:18.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.322 --rc genhtml_branch_coverage=1 00:04:18.322 --rc genhtml_function_coverage=1 00:04:18.322 --rc genhtml_legend=1 00:04:18.322 --rc geninfo_all_blocks=1 00:04:18.322 --rc geninfo_unexecuted_blocks=1 00:04:18.322 00:04:18.322 ' 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:18.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.322 --rc genhtml_branch_coverage=1 00:04:18.322 --rc genhtml_function_coverage=1 00:04:18.322 --rc genhtml_legend=1 00:04:18.322 --rc 
geninfo_all_blocks=1 00:04:18.322 --rc geninfo_unexecuted_blocks=1 00:04:18.322 00:04:18.322 ' 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:18.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.322 --rc genhtml_branch_coverage=1 00:04:18.322 --rc genhtml_function_coverage=1 00:04:18.322 --rc genhtml_legend=1 00:04:18.322 --rc geninfo_all_blocks=1 00:04:18.322 --rc geninfo_unexecuted_blocks=1 00:04:18.322 00:04:18.322 ' 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:18.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.322 --rc genhtml_branch_coverage=1 00:04:18.322 --rc genhtml_function_coverage=1 00:04:18.322 --rc genhtml_legend=1 00:04:18.322 --rc geninfo_all_blocks=1 00:04:18.322 --rc geninfo_unexecuted_blocks=1 00:04:18.322 00:04:18.322 ' 00:04:18.322 17:08:01 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:18.322 17:08:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:18.322 17:08:01 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:18.322 17:08:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.322 17:08:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.322 ************************************ 00:04:18.322 START TEST event_perf 00:04:18.322 ************************************ 00:04:18.322 17:08:01 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:18.322 Running I/O for 1 seconds...[2024-10-30 17:08:01.250035] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:18.322 [2024-10-30 17:08:01.250178] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58103 ] 00:04:18.584 [2024-10-30 17:08:01.416572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:18.584 [2024-10-30 17:08:01.549012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.584 Running I/O for 1 seconds...[2024-10-30 17:08:01.549369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:18.584 [2024-10-30 17:08:01.549914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:18.584 [2024-10-30 17:08:01.550034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.968 00:04:19.968 lcore 0: 153274 00:04:19.968 lcore 1: 153272 00:04:19.968 lcore 2: 153271 00:04:19.968 lcore 3: 153271 00:04:19.968 done. 
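For context, the "-m 0xF" mask passed to event_perf above selects cores 0-3, which is why four lcore counters are printed before the final "done.". A core mask can be expanded into that core list with a few lines of bash (hypothetical snippet, not part of the test suite):

# Hypothetical helper: expand an EAL/SPDK core mask such as 0xF into the
# list of cores it enables (prints "0 1 2 3" for the default 0xF).
mask=${1:-0xF}
for ((core = 0; core < 64; core++)); do
    (( (mask >> core) & 1 )) && printf '%d ' "$core"
done
echo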
00:04:19.968 00:04:19.968 real 0m1.498s 00:04:19.968 user 0m4.280s 00:04:19.968 sys 0m0.094s 00:04:19.968 17:08:02 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:19.968 17:08:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:19.968 ************************************ 00:04:19.968 END TEST event_perf 00:04:19.968 ************************************ 00:04:19.968 17:08:02 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:19.968 17:08:02 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:19.968 17:08:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.968 17:08:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:19.968 ************************************ 00:04:19.968 START TEST event_reactor 00:04:19.968 ************************************ 00:04:19.968 17:08:02 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:19.968 [2024-10-30 17:08:02.803297] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:19.968 [2024-10-30 17:08:02.803405] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58142 ] 00:04:20.227 [2024-10-30 17:08:02.962816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.227 [2024-10-30 17:08:03.073078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.601 test_start 00:04:21.601 oneshot 00:04:21.601 tick 100 00:04:21.601 tick 100 00:04:21.601 tick 250 00:04:21.601 tick 100 00:04:21.601 tick 100 00:04:21.601 tick 250 00:04:21.601 tick 100 00:04:21.601 tick 500 00:04:21.601 tick 100 00:04:21.601 tick 100 00:04:21.601 tick 250 00:04:21.601 tick 100 00:04:21.601 tick 100 00:04:21.601 test_end 00:04:21.601 00:04:21.601 real 0m1.451s 00:04:21.601 user 0m1.273s 00:04:21.601 sys 0m0.070s 00:04:21.601 ************************************ 00:04:21.601 END TEST event_reactor 00:04:21.601 ************************************ 00:04:21.601 17:08:04 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.601 17:08:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:21.601 17:08:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:21.601 17:08:04 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:21.601 17:08:04 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.601 17:08:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.601 ************************************ 00:04:21.601 START TEST event_reactor_perf 00:04:21.601 ************************************ 00:04:21.601 17:08:04 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:21.601 [2024-10-30 17:08:04.313422] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:04:21.601 [2024-10-30 17:08:04.313540] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58179 ] 00:04:21.601 [2024-10-30 17:08:04.472114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.601 [2024-10-30 17:08:04.566963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.974 test_start 00:04:22.974 test_end 00:04:22.974 Performance: 315469 events per second 00:04:22.974 00:04:22.974 real 0m1.429s 00:04:22.974 user 0m1.257s 00:04:22.974 sys 0m0.064s 00:04:22.974 17:08:05 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.974 17:08:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:22.974 ************************************ 00:04:22.974 END TEST event_reactor_perf 00:04:22.974 ************************************ 00:04:22.974 17:08:05 event -- event/event.sh@49 -- # uname -s 00:04:22.974 17:08:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:22.974 17:08:05 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:22.974 17:08:05 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.974 17:08:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.974 17:08:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.974 ************************************ 00:04:22.974 START TEST event_scheduler 00:04:22.974 ************************************ 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:22.974 * Looking for test storage... 
00:04:22.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.974 17:08:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:22.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.974 --rc genhtml_branch_coverage=1 00:04:22.974 --rc genhtml_function_coverage=1 00:04:22.974 --rc genhtml_legend=1 00:04:22.974 --rc geninfo_all_blocks=1 00:04:22.974 --rc geninfo_unexecuted_blocks=1 00:04:22.974 00:04:22.974 ' 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:22.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.974 --rc genhtml_branch_coverage=1 00:04:22.974 --rc genhtml_function_coverage=1 00:04:22.974 --rc genhtml_legend=1 00:04:22.974 --rc geninfo_all_blocks=1 00:04:22.974 --rc geninfo_unexecuted_blocks=1 00:04:22.974 00:04:22.974 ' 00:04:22.974 17:08:05 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:22.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.975 --rc genhtml_branch_coverage=1 00:04:22.975 --rc genhtml_function_coverage=1 00:04:22.975 --rc genhtml_legend=1 00:04:22.975 --rc geninfo_all_blocks=1 00:04:22.975 --rc geninfo_unexecuted_blocks=1 00:04:22.975 00:04:22.975 ' 00:04:22.975 17:08:05 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.975 --rc genhtml_branch_coverage=1 00:04:22.975 --rc genhtml_function_coverage=1 00:04:22.975 --rc genhtml_legend=1 00:04:22.975 --rc geninfo_all_blocks=1 00:04:22.975 --rc geninfo_unexecuted_blocks=1 00:04:22.975 00:04:22.975 ' 00:04:22.975 17:08:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:22.975 17:08:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58254 00:04:22.975 17:08:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.975 17:08:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58254 00:04:22.975 17:08:05 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58254 ']' 00:04:22.975 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:04:22.975 17:08:05 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.975 17:08:05 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:22.975 17:08:05 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.975 17:08:05 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:22.975 17:08:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.975 17:08:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:23.233 [2024-10-30 17:08:05.973588] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:23.233 [2024-10-30 17:08:05.973709] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58254 ] 00:04:23.233 [2024-10-30 17:08:06.132611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:23.491 [2024-10-30 17:08:06.232184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.491 [2024-10-30 17:08:06.232364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.491 [2024-10-30 17:08:06.232724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:23.491 [2024-10-30 17:08:06.232963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:24.058 17:08:06 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.058 17:08:06 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:24.058 17:08:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:24.058 17:08:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.058 17:08:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.058 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:24.058 POWER: Cannot set governor of lcore 0 to userspace 00:04:24.058 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:24.058 POWER: Cannot set governor of lcore 0 to performance 00:04:24.058 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:24.058 POWER: Cannot set governor of lcore 0 to userspace 00:04:24.058 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:24.058 POWER: Cannot set governor of lcore 0 to userspace 00:04:24.058 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:24.058 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:24.058 POWER: Unable to set Power Management Environment for lcore 0 00:04:24.058 [2024-10-30 17:08:06.814663] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:24.058 [2024-10-30 17:08:06.814683] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:24.058 [2024-10-30 17:08:06.814692] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:24.058 [2024-10-30 
17:08:06.814709] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:24.058 [2024-10-30 17:08:06.814717] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:24.058 [2024-10-30 17:08:06.814725] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:24.059 17:08:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.059 17:08:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:24.059 17:08:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.059 17:08:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.059 [2024-10-30 17:08:07.034439] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:24.059 17:08:07 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.059 17:08:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:24.059 17:08:07 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.059 17:08:07 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.059 17:08:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 ************************************ 00:04:24.318 START TEST scheduler_create_thread 00:04:24.318 ************************************ 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 2 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 3 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 4 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:24.318 17:08:07 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 5 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 6 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 7 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 8 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 9 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 10 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 17:08:07 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.318 17:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.694 17:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.694 17:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:25.694 17:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:25.694 17:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.694 17:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.076 17:08:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.076 00:04:27.076 real 0m2.615s 00:04:27.076 user 0m0.016s 00:04:27.076 sys 0m0.006s 00:04:27.076 17:08:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.076 17:08:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.076 ************************************ 00:04:27.076 END TEST scheduler_create_thread 00:04:27.076 ************************************ 00:04:27.076 17:08:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:27.076 17:08:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58254 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58254 ']' 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58254 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58254 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58254' 00:04:27.076 killing process with pid 58254 00:04:27.076 17:08:09 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58254 00:04:27.076 
17:08:09 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58254 00:04:27.333 [2024-10-30 17:08:10.144319] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:27.984 00:04:27.984 real 0m4.942s 00:04:27.984 user 0m8.716s 00:04:27.984 sys 0m0.319s 00:04:27.984 17:08:10 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.984 17:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.984 ************************************ 00:04:27.984 END TEST event_scheduler 00:04:27.984 ************************************ 00:04:27.984 17:08:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:27.984 17:08:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:27.984 17:08:10 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.984 17:08:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.984 17:08:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.984 ************************************ 00:04:27.984 START TEST app_repeat 00:04:27.984 ************************************ 00:04:27.984 17:08:10 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58355 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.984 Process app_repeat pid: 58355 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58355' 00:04:27.984 spdk_app_start Round 0 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58355 /var/tmp/spdk-nbd.sock 00:04:27.984 17:08:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58355 ']' 00:04:27.984 17:08:10 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:27.984 17:08:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:27.984 17:08:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:27.984 17:08:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:27.984 17:08:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:27.984 17:08:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:27.984 [2024-10-30 17:08:10.815874] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:27.984 [2024-10-30 17:08:10.815985] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58355 ] 00:04:28.244 [2024-10-30 17:08:10.971527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.244 [2024-10-30 17:08:11.050031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.244 [2024-10-30 17:08:11.050034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.810 17:08:11 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:28.810 17:08:11 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:28.810 17:08:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.068 Malloc0 00:04:29.068 17:08:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.329 Malloc1 00:04:29.329 17:08:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.329 /dev/nbd0 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.329 17:08:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:29.588 17:08:12 event.app_repeat 
-- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.588 1+0 records in 00:04:29.588 1+0 records out 00:04:29.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215154 s, 19.0 MB/s 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:29.588 /dev/nbd1 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.588 1+0 records in 00:04:29.588 1+0 records out 00:04:29.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204084 s, 20.1 MB/s 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:29.588 17:08:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.588 
17:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.588 17:08:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.846 17:08:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:29.846 { 00:04:29.847 "nbd_device": "/dev/nbd0", 00:04:29.847 "bdev_name": "Malloc0" 00:04:29.847 }, 00:04:29.847 { 00:04:29.847 "nbd_device": "/dev/nbd1", 00:04:29.847 "bdev_name": "Malloc1" 00:04:29.847 } 00:04:29.847 ]' 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:29.847 { 00:04:29.847 "nbd_device": "/dev/nbd0", 00:04:29.847 "bdev_name": "Malloc0" 00:04:29.847 }, 00:04:29.847 { 00:04:29.847 "nbd_device": "/dev/nbd1", 00:04:29.847 "bdev_name": "Malloc1" 00:04:29.847 } 00:04:29.847 ]' 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:29.847 /dev/nbd1' 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:29.847 /dev/nbd1' 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:29.847 256+0 records in 00:04:29.847 256+0 records out 00:04:29.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614838 s, 171 MB/s 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:29.847 256+0 records in 00:04:29.847 256+0 records out 00:04:29.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161052 s, 65.1 MB/s 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.847 17:08:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.106 256+0 records in 00:04:30.106 256+0 records out 00:04:30.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145167 s, 72.2 MB/s 00:04:30.106 17:08:12 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.106 17:08:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.106 17:08:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.365 17:08:13 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.365 17:08:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.622 17:08:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.622 17:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.622 17:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.622 17:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:30.623 17:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.623 17:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:30.623 17:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:30.623 17:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:30.623 17:08:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:30.623 17:08:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:30.623 17:08:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:30.623 17:08:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:30.623 17:08:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:30.880 17:08:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.443 [2024-10-30 17:08:14.309189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.443 [2024-10-30 17:08:14.377986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.443 [2024-10-30 17:08:14.378086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.700 [2024-10-30 17:08:14.474644] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.700 [2024-10-30 17:08:14.474708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.228 17:08:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.228 spdk_app_start Round 1 00:04:34.228 17:08:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:34.228 17:08:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58355 /var/tmp/spdk-nbd.sock 00:04:34.228 17:08:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58355 ']' 00:04:34.228 17:08:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.228 17:08:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:34.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.228 17:08:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:34.228 17:08:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:34.228 17:08:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.228 17:08:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.228 17:08:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:34.228 17:08:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.228 Malloc0 00:04:34.228 17:08:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.487 Malloc1 00:04:34.487 17:08:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.487 17:08:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.745 /dev/nbd0 00:04:34.745 17:08:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.745 17:08:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.745 1+0 records in 00:04:34.745 1+0 records out 
00:04:34.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466347 s, 8.8 MB/s 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:34.745 17:08:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:34.745 17:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.745 17:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.745 17:08:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:35.003 /dev/nbd1 00:04:35.003 17:08:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:35.003 17:08:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.003 1+0 records in 00:04:35.003 1+0 records out 00:04:35.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476176 s, 8.6 MB/s 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:35.003 17:08:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:35.003 17:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.003 17:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.003 17:08:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.003 17:08:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.003 17:08:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:35.262 { 00:04:35.262 "nbd_device": "/dev/nbd0", 00:04:35.262 "bdev_name": "Malloc0" 00:04:35.262 }, 00:04:35.262 { 00:04:35.262 "nbd_device": "/dev/nbd1", 00:04:35.262 "bdev_name": "Malloc1" 00:04:35.262 } 
00:04:35.262 ]' 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:35.262 { 00:04:35.262 "nbd_device": "/dev/nbd0", 00:04:35.262 "bdev_name": "Malloc0" 00:04:35.262 }, 00:04:35.262 { 00:04:35.262 "nbd_device": "/dev/nbd1", 00:04:35.262 "bdev_name": "Malloc1" 00:04:35.262 } 00:04:35.262 ]' 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:35.262 /dev/nbd1' 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:35.262 /dev/nbd1' 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:35.262 17:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.263 256+0 records in 00:04:35.263 256+0 records out 00:04:35.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542476 s, 193 MB/s 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.263 256+0 records in 00:04:35.263 256+0 records out 00:04:35.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169519 s, 61.9 MB/s 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.263 256+0 records in 00:04:35.263 256+0 records out 00:04:35.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147412 s, 71.1 MB/s 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.263 17:08:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.263 17:08:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.521 17:08:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.779 17:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.037 17:08:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.037 17:08:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.296 17:08:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.864 [2024-10-30 17:08:19.685235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.864 [2024-10-30 17:08:19.752103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.864 [2024-10-30 17:08:19.752104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.135 [2024-10-30 17:08:19.853486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:37.135 [2024-10-30 17:08:19.853534] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:39.676 17:08:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.676 17:08:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:39.676 spdk_app_start Round 2 00:04:39.676 17:08:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58355 /var/tmp/spdk-nbd.sock 00:04:39.676 17:08:22 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58355 ']' 00:04:39.676 17:08:22 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.676 17:08:22 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.676 17:08:22 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:39.676 17:08:22 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.676 17:08:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.676 17:08:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.676 17:08:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:39.676 17:08:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.676 Malloc0 00:04:39.676 17:08:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.934 Malloc1 00:04:39.934 17:08:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.934 17:08:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.193 /dev/nbd0 00:04:40.193 17:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.193 17:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.193 1+0 records in 00:04:40.193 1+0 records out 
00:04:40.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173982 s, 23.5 MB/s 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:40.193 17:08:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:40.193 17:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.193 17:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.193 17:08:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.451 /dev/nbd1 00:04:40.451 17:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.451 17:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.451 1+0 records in 00:04:40.451 1+0 records out 00:04:40.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712849 s, 5.7 MB/s 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.451 17:08:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:40.452 17:08:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.452 17:08:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:40.452 17:08:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:40.452 17:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.452 17:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.452 17:08:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.452 17:08:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.452 17:08:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.710 { 00:04:40.710 "nbd_device": "/dev/nbd0", 00:04:40.710 "bdev_name": "Malloc0" 00:04:40.710 }, 00:04:40.710 { 00:04:40.710 "nbd_device": "/dev/nbd1", 00:04:40.710 "bdev_name": "Malloc1" 00:04:40.710 } 
00:04:40.710 ]' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.710 { 00:04:40.710 "nbd_device": "/dev/nbd0", 00:04:40.710 "bdev_name": "Malloc0" 00:04:40.710 }, 00:04:40.710 { 00:04:40.710 "nbd_device": "/dev/nbd1", 00:04:40.710 "bdev_name": "Malloc1" 00:04:40.710 } 00:04:40.710 ]' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.710 /dev/nbd1' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.710 /dev/nbd1' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.710 256+0 records in 00:04:40.710 256+0 records out 00:04:40.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00852004 s, 123 MB/s 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.710 256+0 records in 00:04:40.710 256+0 records out 00:04:40.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133855 s, 78.3 MB/s 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.710 256+0 records in 00:04:40.710 256+0 records out 00:04:40.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156245 s, 67.1 MB/s 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.710 17:08:23 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.710 17:08:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.968 17:08:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.226 17:08:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.226 17:08:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.226 17:08:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.792 17:08:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.050 [2024-10-30 17:08:25.029085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.309 [2024-10-30 17:08:25.100451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.309 [2024-10-30 17:08:25.100548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.309 [2024-10-30 17:08:25.198002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.309 [2024-10-30 17:08:25.198042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.837 17:08:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58355 /var/tmp/spdk-nbd.sock 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58355 ']' 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:44.837 17:08:27 event.app_repeat -- event/event.sh@39 -- # killprocess 58355 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58355 ']' 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58355 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58355 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.837 killing process with pid 58355 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58355' 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58355 00:04:44.837 17:08:27 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58355 00:04:45.404 spdk_app_start is called in Round 0. 00:04:45.404 Shutdown signal received, stop current app iteration 00:04:45.404 Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 reinitialization... 00:04:45.404 spdk_app_start is called in Round 1. 00:04:45.404 Shutdown signal received, stop current app iteration 00:04:45.404 Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 reinitialization... 00:04:45.404 spdk_app_start is called in Round 2. 00:04:45.404 Shutdown signal received, stop current app iteration 00:04:45.404 Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 reinitialization... 00:04:45.404 spdk_app_start is called in Round 3. 00:04:45.404 Shutdown signal received, stop current app iteration 00:04:45.404 17:08:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:45.404 17:08:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:45.404 00:04:45.404 real 0m17.443s 00:04:45.404 user 0m38.286s 00:04:45.404 sys 0m1.986s 00:04:45.404 17:08:28 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.404 17:08:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.404 ************************************ 00:04:45.404 END TEST app_repeat 00:04:45.404 ************************************ 00:04:45.404 17:08:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:45.404 17:08:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:45.404 17:08:28 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.404 17:08:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.404 17:08:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.404 ************************************ 00:04:45.404 START TEST cpu_locks 00:04:45.404 ************************************ 00:04:45.404 17:08:28 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:45.404 * Looking for test storage... 
00:04:45.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:45.404 17:08:28 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:45.404 17:08:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:45.404 17:08:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:45.662 17:08:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:45.662 17:08:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.662 17:08:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.662 17:08:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.662 17:08:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.662 17:08:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.662 17:08:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.662 17:08:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.663 17:08:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:45.663 17:08:28 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.663 17:08:28 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:45.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.663 --rc genhtml_branch_coverage=1 00:04:45.663 --rc genhtml_function_coverage=1 00:04:45.663 --rc genhtml_legend=1 00:04:45.663 --rc geninfo_all_blocks=1 00:04:45.663 --rc geninfo_unexecuted_blocks=1 00:04:45.663 00:04:45.663 ' 00:04:45.663 17:08:28 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:45.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.663 --rc genhtml_branch_coverage=1 00:04:45.663 --rc genhtml_function_coverage=1 
00:04:45.663 --rc genhtml_legend=1 00:04:45.663 --rc geninfo_all_blocks=1 00:04:45.663 --rc geninfo_unexecuted_blocks=1 00:04:45.663 00:04:45.663 ' 00:04:45.663 17:08:28 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:45.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.663 --rc genhtml_branch_coverage=1 00:04:45.663 --rc genhtml_function_coverage=1 00:04:45.663 --rc genhtml_legend=1 00:04:45.663 --rc geninfo_all_blocks=1 00:04:45.663 --rc geninfo_unexecuted_blocks=1 00:04:45.663 00:04:45.663 ' 00:04:45.663 17:08:28 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:45.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.663 --rc genhtml_branch_coverage=1 00:04:45.663 --rc genhtml_function_coverage=1 00:04:45.663 --rc genhtml_legend=1 00:04:45.663 --rc geninfo_all_blocks=1 00:04:45.663 --rc geninfo_unexecuted_blocks=1 00:04:45.663 00:04:45.663 ' 00:04:45.663 17:08:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:45.663 17:08:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:45.663 17:08:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:45.663 17:08:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:45.663 17:08:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.663 17:08:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.663 17:08:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.663 ************************************ 00:04:45.663 START TEST default_locks 00:04:45.663 ************************************ 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58787 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58787 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58787 ']' 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.663 17:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.663 [2024-10-30 17:08:28.471755] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
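The shell trace above is scripts/common.sh deciding whether the installed lcov predates version 2: it takes the last field of lcov --version, splits both version strings on '.', '-' and ':', and compares them component by component; because 1.15 sorts below 2, the 1.x-style branch and function coverage flags end up in LCOV_OPTS. A simplified stand-in for that comparison, assuming plain bash (the real cmp_versions helper takes an operator argument and does more validation than this):

    # Rough equivalent of the lt/cmp_versions trace above, not the literal
    # scripts/common.sh code.
    version_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi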
00:04:45.663 [2024-10-30 17:08:28.472122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58787 ] 00:04:45.663 [2024-10-30 17:08:28.618872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.921 [2024-10-30 17:08:28.695039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.488 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.488 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:46.488 17:08:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58787 00:04:46.488 17:08:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58787 00:04:46.488 17:08:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58787 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58787 ']' 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58787 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58787 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:46.746 killing process with pid 58787 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58787' 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58787 00:04:46.746 17:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58787 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58787 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58787 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58787 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58787 ']' 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
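The default_locks run above follows a simple arc: start spdk_tgt on core mask 0x1, confirm the target is holding a file lock whose name contains spdk_cpu_lock, kill it, and then (continuing below) expect waitforlisten against the dead pid to fail. A rough sketch of the lock check and the teardown, built from the lslocks and ps calls visible in the trace rather than the literal helpers:

    # Does pid $1 hold one of the per-core lock files? lslocks is util-linux;
    # the lock paths seen later in this log look like /var/tmp/spdk_cpu_lock_000.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # Simplified killprocess: only kill the pid if it is alive and really is an
    # SPDK reactor thread (comm reactor_0), then reap it.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1
        [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]] || return 1
        kill "$pid" && wait "$pid"   # wait works because the target is a child of the test shell
    }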
00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.155 ERROR: process (pid: 58787) is no longer running 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.155 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58787) - No such process 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:48.155 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:48.156 00:04:48.156 real 0m2.272s 00:04:48.156 user 0m2.311s 00:04:48.156 sys 0m0.394s 00:04:48.156 ************************************ 00:04:48.156 END TEST default_locks 00:04:48.156 ************************************ 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.156 17:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.156 17:08:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:48.156 17:08:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.156 17:08:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.156 17:08:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.156 ************************************ 00:04:48.156 START TEST default_locks_via_rpc 00:04:48.156 ************************************ 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58840 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58840 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58840 ']' 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
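The negative check that closed default_locks just above relies on the harness's NOT wrapper: it runs the wrapped command, remembers its exit status, and succeeds only if that command failed. A simplified shape of it, following the es bookkeeping shown in the trace (the real helper also verifies its argument is a known command and separately checks whether the exit status is above 128, i.e. death by signal):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # assumption here: treat death-by-signal as unexpected
        (( es != 0 ))                # succeed only if the wrapped command failed
    }

    NOT waitforlisten 58787   # passes here because pid 58787 was killed just before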
00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.156 17:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.156 [2024-10-30 17:08:30.814257] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:48.156 [2024-10-30 17:08:30.814365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58840 ] 00:04:48.156 [2024-10-30 17:08:30.965989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.156 [2024-10-30 17:08:31.040532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58840 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58840 00:04:48.723 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58840 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58840 ']' 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58840 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58840 00:04:48.982 killing process with pid 58840 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58840' 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58840 00:04:48.982 17:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58840 00:04:50.356 00:04:50.356 real 0m2.210s 00:04:50.356 user 0m2.207s 00:04:50.356 sys 0m0.388s 00:04:50.356 17:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.356 17:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.356 ************************************ 00:04:50.356 END TEST default_locks_via_rpc 00:04:50.356 ************************************ 00:04:50.356 17:08:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:50.356 17:08:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.356 17:08:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.356 17:08:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.356 ************************************ 00:04:50.356 START TEST non_locking_app_on_locked_coremask 00:04:50.356 ************************************ 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58892 00:04:50.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58892 /var/tmp/spdk.sock 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58892 ']' 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.356 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.356 [2024-10-30 17:08:33.073105] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:04:50.356 [2024-10-30 17:08:33.073362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58892 ] 00:04:50.356 [2024-10-30 17:08:33.216368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.356 [2024-10-30 17:08:33.291154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58908 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58908 /var/tmp/spdk2.sock 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58908 ']' 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.925 17:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.186 [2024-10-30 17:08:33.946520] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:51.186 [2024-10-30 17:08:33.947127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58908 ] 00:04:51.186 [2024-10-30 17:08:34.110624] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
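What lets the second launch above succeed is --disable-cpumask-locks: the first target already holds the lock for core 0, so a second target on the same mask can only come up if it skips taking core locks, which is exactly what the "CPU core locks deactivated" notice confirms. Condensed, the two launches in this test look like this (binaries, masks and sockets as shown in the trace):

    # First target: claims core 0 and listens on the default /var/tmp/spdk.sock.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid1=$!

    # Second target: shares core 0, so it must skip the core lock and use its own socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!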
00:04:51.186 [2024-10-30 17:08:34.110660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.447 [2024-10-30 17:08:34.263622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.390 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.390 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:52.390 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58892 00:04:52.390 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58892 00:04:52.390 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.649 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58892 00:04:52.649 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58892 ']' 00:04:52.649 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58892 00:04:52.650 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:52.650 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:52.650 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58892 00:04:52.650 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:52.650 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:52.650 killing process with pid 58892 00:04:52.650 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58892' 00:04:52.650 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58892 00:04:52.650 17:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58892 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58908 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58908 ']' 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58908 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58908 00:04:55.183 killing process with pid 58908 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58908' 00:04:55.183 17:08:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58908 00:04:55.183 17:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58908 00:04:56.118 00:04:56.118 real 0m5.938s 00:04:56.118 user 0m6.178s 00:04:56.118 sys 0m0.775s 00:04:56.118 ************************************ 00:04:56.118 END TEST non_locking_app_on_locked_coremask 00:04:56.118 ************************************ 00:04:56.118 17:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.118 17:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.118 17:08:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:56.118 17:08:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.118 17:08:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.118 17:08:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.118 ************************************ 00:04:56.118 START TEST locking_app_on_unlocked_coremask 00:04:56.118 ************************************ 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:56.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58999 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58999 /var/tmp/spdk.sock 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58999 ']' 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.118 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.118 [2024-10-30 17:08:39.079562] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:04:56.118 [2024-10-30 17:08:39.079684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58999 ] 00:04:56.378 [2024-10-30 17:08:39.235641] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:56.378 [2024-10-30 17:08:39.235831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.378 [2024-10-30 17:08:39.314710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59015 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59015 /var/tmp/spdk2.sock 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59015 ']' 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.943 17:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.202 [2024-10-30 17:08:40.014271] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:04:57.202 [2024-10-30 17:08:40.014669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59015 ] 00:04:57.498 [2024-10-30 17:08:40.193387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.498 [2024-10-30 17:08:40.345686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.442 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.442 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:58.442 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59015 00:04:58.442 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59015 00:04:58.442 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58999 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58999 ']' 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58999 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58999 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.700 killing process with pid 58999 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58999' 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58999 00:04:58.700 17:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58999 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59015 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59015 ']' 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59015 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59015 00:05:01.230 killing process with pid 59015 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.230 17:08:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59015' 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59015 00:05:01.230 17:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59015 00:05:02.166 00:05:02.166 real 0m6.082s 00:05:02.166 user 0m6.424s 00:05:02.166 sys 0m0.850s 00:05:02.166 17:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.166 17:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.166 ************************************ 00:05:02.166 END TEST locking_app_on_unlocked_coremask 00:05:02.166 ************************************ 00:05:02.166 17:08:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:02.166 17:08:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.166 17:08:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.166 17:08:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.425 ************************************ 00:05:02.425 START TEST locking_app_on_locked_coremask 00:05:02.425 ************************************ 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59106 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59106 /var/tmp/spdk.sock 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59106 ']' 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.425 17:08:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.425 [2024-10-30 17:08:45.214050] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:05:02.425 [2024-10-30 17:08:45.214143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59106 ] 00:05:02.425 [2024-10-30 17:08:45.363386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.684 [2024-10-30 17:08:45.439483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59122 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59122 /var/tmp/spdk2.sock 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59122 /var/tmp/spdk2.sock 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:03.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59122 /var/tmp/spdk2.sock 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59122 ']' 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.250 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.250 [2024-10-30 17:08:46.122502] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:05:03.250 [2024-10-30 17:08:46.122620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59122 ] 00:05:03.507 [2024-10-30 17:08:46.287044] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59106 has claimed it. 00:05:03.507 [2024-10-30 17:08:46.287093] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:04.072 ERROR: process (pid: 59122) is no longer running 00:05:04.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59122) - No such process 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59106 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59106 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59106 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59106 ']' 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59106 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.072 17:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59106 00:05:04.072 killing process with pid 59106 00:05:04.072 17:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.072 17:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.072 17:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59106' 00:05:04.072 17:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59106 00:05:04.072 17:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59106 00:05:05.443 00:05:05.443 real 0m2.998s 00:05:05.443 user 0m3.231s 00:05:05.443 sys 0m0.523s 00:05:05.443 17:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.443 ************************************ 00:05:05.443 END 
TEST locking_app_on_locked_coremask 00:05:05.443 ************************************ 00:05:05.443 17:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.443 17:08:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:05.443 17:08:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.443 17:08:48 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.443 17:08:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.443 ************************************ 00:05:05.443 START TEST locking_overlapped_coremask 00:05:05.443 ************************************ 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59175 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59175 /var/tmp/spdk.sock 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59175 ']' 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.443 17:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.443 [2024-10-30 17:08:48.282682] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:05:05.443 [2024-10-30 17:08:48.282793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59175 ] 00:05:05.700 [2024-10-30 17:08:48.436701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:05.700 [2024-10-30 17:08:48.535518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.700 [2024-10-30 17:08:48.535711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.700 [2024-10-30 17:08:48.535722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59193 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59193 /var/tmp/spdk2.sock 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59193 /var/tmp/spdk2.sock 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59193 /var/tmp/spdk2.sock 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59193 ']' 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.306 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.306 [2024-10-30 17:08:49.263397] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:05:06.306 [2024-10-30 17:08:49.263515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59193 ] 00:05:06.563 [2024-10-30 17:08:49.436297] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59175 has claimed it. 00:05:06.563 [2024-10-30 17:08:49.436349] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:07.130 ERROR: process (pid: 59193) is no longer running 00:05:07.130 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59193) - No such process 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59175 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59175 ']' 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59175 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59175 00:05:07.130 killing process with pid 59175 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59175' 00:05:07.130 17:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59175 00:05:07.130 17:08:49 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59175 00:05:08.501 ************************************ 00:05:08.501 END TEST locking_overlapped_coremask 00:05:08.501 ************************************ 00:05:08.501 00:05:08.501 real 0m2.869s 00:05:08.501 user 0m7.783s 00:05:08.501 sys 0m0.412s 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.501 17:08:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:08.501 17:08:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.501 17:08:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.501 17:08:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.501 ************************************ 00:05:08.501 START TEST locking_overlapped_coremask_via_rpc 00:05:08.501 ************************************ 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:08.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59246 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59246 /var/tmp/spdk.sock 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59246 ']' 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.501 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:08.501 [2024-10-30 17:08:51.176920] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:05:08.501 [2024-10-30 17:08:51.177379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:05:08.501 [2024-10-30 17:08:51.327662] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:08.501 [2024-10-30 17:08:51.327802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:08.501 [2024-10-30 17:08:51.406672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.501 [2024-10-30 17:08:51.406941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.501 [2024-10-30 17:08:51.406964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59259 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59259 /var/tmp/spdk2.sock 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59259 ']' 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.067 17:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.067 [2024-10-30 17:08:52.047021] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:05:09.067 [2024-10-30 17:08:52.047275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59259 ] 00:05:09.324 [2024-10-30 17:08:52.210416] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:09.324 [2024-10-30 17:08:52.210450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.582 [2024-10-30 17:08:52.368909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.582 [2024-10-30 17:08:52.369066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.582 [2024-10-30 17:08:52.369096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.517 [2024-10-30 17:08:53.280304] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59246 has claimed it. 
00:05:10.517 request: 00:05:10.517 { 00:05:10.517 "method": "framework_enable_cpumask_locks", 00:05:10.517 "req_id": 1 00:05:10.517 } 00:05:10.517 Got JSON-RPC error response 00:05:10.517 response: 00:05:10.517 { 00:05:10.517 "code": -32603, 00:05:10.517 "message": "Failed to claim CPU core: 2" 00:05:10.517 } 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59246 /var/tmp/spdk.sock 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59246 ']' 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:10.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59259 /var/tmp/spdk2.sock 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59259 ']' 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:10.517 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.776 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.776 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:10.776 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:10.776 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.776 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.776 ************************************ 00:05:10.776 END TEST locking_overlapped_coremask_via_rpc 00:05:10.776 ************************************ 00:05:10.776 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.776 00:05:10.776 real 0m2.550s 00:05:10.776 user 0m0.901s 00:05:10.776 sys 0m0.107s 00:05:10.776 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:10.776 17:08:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.776 17:08:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:10.776 17:08:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59246 ]] 00:05:10.776 17:08:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59246 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59246 ']' 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59246 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59246 00:05:10.776 killing process with pid 59246 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59246' 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59246 00:05:10.776 17:08:53 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59246 00:05:12.152 17:08:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59259 ]] 00:05:12.152 17:08:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59259 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59259 ']' 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59259 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:12.152 
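For reference, the overlapped-coremask scenario exercised above can be reproduced by hand. This is a minimal sketch, assuming the same SPDK build path and RPC sockets as in this run and that cores 0-4 are free; the commands mirror what the test harness logged:

  # Terminal 1: first target claims cores 0-2 (mask 0x7), CPU core locks disabled at startup
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  # Terminal 2: second target overlaps on core 2 (mask 0x1c = cores 2-4), separate RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

  # Enabling the locks on the first target succeeds and creates
  # /var/tmp/spdk_cpu_lock_000 .. /var/tmp/spdk_cpu_lock_002
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

  # The same RPC against the second target is expected to fail with
  # "Failed to claim CPU core: 2" (code -32603), as seen in the log above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || true

  ls /var/tmp/spdk_cpu_lock_*   # the three lock files held by the first target

The check_remaining_locks step in the log is just this final ls compared against the expected /var/tmp/spdk_cpu_lock_{000..002} set.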
17:08:54 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59259 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59259' 00:05:12.152 killing process with pid 59259 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59259 00:05:12.152 17:08:54 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59259 00:05:13.086 17:08:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.086 Process with pid 59246 is not found 00:05:13.086 17:08:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:13.086 17:08:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59246 ]] 00:05:13.086 17:08:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59246 00:05:13.086 17:08:56 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59246 ']' 00:05:13.086 17:08:56 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59246 00:05:13.086 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59246) - No such process 00:05:13.086 17:08:56 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59246 is not found' 00:05:13.086 17:08:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59259 ]] 00:05:13.086 17:08:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59259 00:05:13.086 Process with pid 59259 is not found 00:05:13.086 17:08:56 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59259 ']' 00:05:13.086 17:08:56 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59259 00:05:13.086 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59259) - No such process 00:05:13.086 17:08:56 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59259 is not found' 00:05:13.086 17:08:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.086 ************************************ 00:05:13.086 END TEST cpu_locks 00:05:13.086 ************************************ 00:05:13.086 00:05:13.086 real 0m27.790s 00:05:13.086 user 0m47.449s 00:05:13.086 sys 0m4.172s 00:05:13.086 17:08:56 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:13.086 17:08:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.344 ************************************ 00:05:13.344 END TEST event 00:05:13.344 ************************************ 00:05:13.344 00:05:13.344 real 0m55.035s 00:05:13.344 user 1m41.426s 00:05:13.344 sys 0m6.933s 00:05:13.344 17:08:56 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:13.344 17:08:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.344 17:08:56 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:13.344 17:08:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:13.344 17:08:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:13.344 17:08:56 -- common/autotest_common.sh@10 -- # set +x 00:05:13.344 ************************************ 00:05:13.344 START TEST thread 00:05:13.344 ************************************ 00:05:13.344 17:08:56 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:13.344 * Looking for test storage... 
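The teardown above follows the killprocess pattern from autotest_common.sh: probe the pid with kill -0, read the process name with ps to confirm it is an SPDK reactor, then kill and wait, falling back to a "not found" message once the process is gone. A rough sketch of that pattern (pid value taken from this run, purely illustrative):

  pid=59246
  if kill -0 "$pid" 2>/dev/null; then
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in this run
      [ "$process_name" = sudo ] || echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  else
      echo "Process with pid $pid is not found"          # matches the cleanup output above
  fi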
00:05:13.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:13.344 17:08:56 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.344 17:08:56 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:13.344 17:08:56 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:13.344 17:08:56 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:13.344 17:08:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.344 17:08:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.344 17:08:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.344 17:08:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.344 17:08:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.344 17:08:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.344 17:08:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.344 17:08:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.344 17:08:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.344 17:08:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.344 17:08:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.344 17:08:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:13.344 17:08:56 thread -- scripts/common.sh@345 -- # : 1 00:05:13.344 17:08:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.344 17:08:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.344 17:08:56 thread -- scripts/common.sh@365 -- # decimal 1 00:05:13.345 17:08:56 thread -- scripts/common.sh@353 -- # local d=1 00:05:13.345 17:08:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.345 17:08:56 thread -- scripts/common.sh@355 -- # echo 1 00:05:13.345 17:08:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.345 17:08:56 thread -- scripts/common.sh@366 -- # decimal 2 00:05:13.345 17:08:56 thread -- scripts/common.sh@353 -- # local d=2 00:05:13.345 17:08:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.345 17:08:56 thread -- scripts/common.sh@355 -- # echo 2 00:05:13.345 17:08:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.345 17:08:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.345 17:08:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.345 17:08:56 thread -- scripts/common.sh@368 -- # return 0 00:05:13.345 17:08:56 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.345 17:08:56 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:13.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.345 --rc genhtml_branch_coverage=1 00:05:13.345 --rc genhtml_function_coverage=1 00:05:13.345 --rc genhtml_legend=1 00:05:13.345 --rc geninfo_all_blocks=1 00:05:13.345 --rc geninfo_unexecuted_blocks=1 00:05:13.345 00:05:13.345 ' 00:05:13.345 17:08:56 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:13.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.345 --rc genhtml_branch_coverage=1 00:05:13.345 --rc genhtml_function_coverage=1 00:05:13.345 --rc genhtml_legend=1 00:05:13.345 --rc geninfo_all_blocks=1 00:05:13.345 --rc geninfo_unexecuted_blocks=1 00:05:13.345 00:05:13.345 ' 00:05:13.345 17:08:56 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:13.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:13.345 --rc genhtml_branch_coverage=1 00:05:13.345 --rc genhtml_function_coverage=1 00:05:13.345 --rc genhtml_legend=1 00:05:13.345 --rc geninfo_all_blocks=1 00:05:13.345 --rc geninfo_unexecuted_blocks=1 00:05:13.345 00:05:13.345 ' 00:05:13.345 17:08:56 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:13.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.345 --rc genhtml_branch_coverage=1 00:05:13.345 --rc genhtml_function_coverage=1 00:05:13.345 --rc genhtml_legend=1 00:05:13.345 --rc geninfo_all_blocks=1 00:05:13.345 --rc geninfo_unexecuted_blocks=1 00:05:13.345 00:05:13.345 ' 00:05:13.345 17:08:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.345 17:08:56 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:13.345 17:08:56 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:13.345 17:08:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.345 ************************************ 00:05:13.345 START TEST thread_poller_perf 00:05:13.345 ************************************ 00:05:13.345 17:08:56 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.345 [2024-10-30 17:08:56.298438] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:05:13.345 [2024-10-30 17:08:56.298564] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59413 ] 00:05:13.602 [2024-10-30 17:08:56.464531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.602 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:13.602 [2024-10-30 17:08:56.556673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.976 [2024-10-30T17:08:57.957Z] ====================================== 00:05:14.976 [2024-10-30T17:08:57.957Z] busy:2610487336 (cyc) 00:05:14.976 [2024-10-30T17:08:57.957Z] total_run_count: 307000 00:05:14.976 [2024-10-30T17:08:57.957Z] tsc_hz: 2600000000 (cyc) 00:05:14.976 [2024-10-30T17:08:57.957Z] ====================================== 00:05:14.976 [2024-10-30T17:08:57.957Z] poller_cost: 8503 (cyc), 3270 (nsec) 00:05:14.976 00:05:14.976 real 0m1.443s 00:05:14.976 user 0m1.264s 00:05:14.976 sys 0m0.071s 00:05:14.976 17:08:57 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.976 ************************************ 00:05:14.976 END TEST thread_poller_perf 00:05:14.976 ************************************ 00:05:14.976 17:08:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.976 17:08:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.976 17:08:57 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:14.976 17:08:57 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.976 17:08:57 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.976 ************************************ 00:05:14.976 START TEST thread_poller_perf 00:05:14.976 ************************************ 00:05:14.976 17:08:57 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.976 [2024-10-30 17:08:57.778455] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:05:14.976 [2024-10-30 17:08:57.778763] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59444 ] 00:05:14.976 [2024-10-30 17:08:57.932613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.235 [2024-10-30 17:08:58.026589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.235 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:16.176 [2024-10-30T17:08:59.157Z] ====================================== 00:05:16.176 [2024-10-30T17:08:59.157Z] busy:2603050792 (cyc) 00:05:16.176 [2024-10-30T17:08:59.157Z] total_run_count: 4449000 00:05:16.176 [2024-10-30T17:08:59.157Z] tsc_hz: 2600000000 (cyc) 00:05:16.176 [2024-10-30T17:08:59.157Z] ====================================== 00:05:16.176 [2024-10-30T17:08:59.157Z] poller_cost: 585 (cyc), 225 (nsec) 00:05:16.176 00:05:16.176 real 0m1.392s 00:05:16.176 user 0m1.233s 00:05:16.176 sys 0m0.052s 00:05:16.176 17:08:59 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.176 17:08:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.176 ************************************ 00:05:16.176 END TEST thread_poller_perf 00:05:16.176 ************************************ 00:05:16.435 17:08:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.435 00:05:16.435 real 0m3.040s 00:05:16.435 user 0m2.601s 00:05:16.435 sys 0m0.229s 00:05:16.435 17:08:59 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.435 17:08:59 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.435 ************************************ 00:05:16.435 END TEST thread 00:05:16.435 ************************************ 00:05:16.435 17:08:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:16.435 17:08:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:16.435 17:08:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.435 17:08:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.435 17:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:16.435 ************************************ 00:05:16.435 START TEST app_cmdline 00:05:16.435 ************************************ 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:16.435 * Looking for test storage... 
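The poller_cost figures printed by the two runs above are consistent with dividing the busy cycle count by total_run_count, with the nanosecond value following from the reported 2.6 GHz TSC. A quick shell-arithmetic check using only the numbers shown in this log:

  # 1 µs period run:  2610487336 cyc / 307000 polls
  echo $(( 2610487336 / 307000 ))              # 8503 cyc per poll
  echo $(( 8503 * 1000000000 / 2600000000 ))   # ~3270 nsec at tsc_hz=2600000000

  # 0 µs (busy-loop) run:  2603050792 cyc / 4449000 polls
  echo $(( 2603050792 / 4449000 ))             # 585 cyc per poll
  echo $(( 585 * 1000000000 / 2600000000 ))    # ~225 nsec

The busy-loop run drives far more iterations in the same second, which is why its per-poll cost is so much lower.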
00:05:16.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.435 17:08:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.435 --rc genhtml_branch_coverage=1 00:05:16.435 --rc genhtml_function_coverage=1 00:05:16.435 --rc genhtml_legend=1 00:05:16.435 --rc geninfo_all_blocks=1 00:05:16.435 --rc geninfo_unexecuted_blocks=1 00:05:16.435 00:05:16.435 ' 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.435 --rc genhtml_branch_coverage=1 00:05:16.435 --rc genhtml_function_coverage=1 00:05:16.435 --rc genhtml_legend=1 00:05:16.435 --rc geninfo_all_blocks=1 00:05:16.435 --rc geninfo_unexecuted_blocks=1 00:05:16.435 
00:05:16.435 ' 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.435 --rc genhtml_branch_coverage=1 00:05:16.435 --rc genhtml_function_coverage=1 00:05:16.435 --rc genhtml_legend=1 00:05:16.435 --rc geninfo_all_blocks=1 00:05:16.435 --rc geninfo_unexecuted_blocks=1 00:05:16.435 00:05:16.435 ' 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.435 --rc genhtml_branch_coverage=1 00:05:16.435 --rc genhtml_function_coverage=1 00:05:16.435 --rc genhtml_legend=1 00:05:16.435 --rc geninfo_all_blocks=1 00:05:16.435 --rc geninfo_unexecuted_blocks=1 00:05:16.435 00:05:16.435 ' 00:05:16.435 17:08:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:16.435 17:08:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59533 00:05:16.435 17:08:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59533 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59533 ']' 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.435 17:08:59 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:16.435 17:08:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:16.694 [2024-10-30 17:08:59.417339] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:05:16.694 [2024-10-30 17:08:59.417457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59533 ] 00:05:16.694 [2024-10-30 17:08:59.571970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.694 [2024-10-30 17:08:59.648230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.261 17:09:00 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.261 17:09:00 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:17.261 17:09:00 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:17.520 { 00:05:17.520 "version": "SPDK v25.01-pre git sha1 12fc2abf1", 00:05:17.520 "fields": { 00:05:17.520 "major": 25, 00:05:17.520 "minor": 1, 00:05:17.520 "patch": 0, 00:05:17.520 "suffix": "-pre", 00:05:17.520 "commit": "12fc2abf1" 00:05:17.520 } 00:05:17.520 } 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:17.520 17:09:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:17.520 17:09:00 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.779 request: 00:05:17.779 { 00:05:17.779 "method": "env_dpdk_get_mem_stats", 00:05:17.779 "req_id": 1 00:05:17.779 } 00:05:17.779 Got JSON-RPC error response 00:05:17.779 response: 00:05:17.779 { 00:05:17.779 "code": -32601, 00:05:17.779 "message": "Method not found" 00:05:17.779 } 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:17.779 17:09:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59533 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59533 ']' 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59533 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59533 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:17.779 killing process with pid 59533 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59533' 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@971 -- # kill 59533 00:05:17.779 17:09:00 app_cmdline -- common/autotest_common.sh@976 -- # wait 59533 00:05:19.154 ************************************ 00:05:19.154 END TEST app_cmdline 00:05:19.154 ************************************ 00:05:19.154 00:05:19.154 real 0m2.646s 00:05:19.154 user 0m3.002s 00:05:19.154 sys 0m0.395s 00:05:19.154 17:09:01 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.154 17:09:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.154 17:09:01 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:19.154 17:09:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.154 17:09:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.154 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.154 ************************************ 00:05:19.154 START TEST version 00:05:19.154 ************************************ 00:05:19.154 17:09:01 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:19.154 * Looking for test storage... 
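The app_cmdline run above shows the effect of the --rpcs-allowed filter: only spdk_get_version and rpc_get_methods are reachable, and anything outside the allow-list gets JSON-RPC error -32601. A minimal sketch of the same check, assuming the repo layout used in this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  # Allowed methods answer normally
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version   # reported "SPDK v25.01-pre git sha1 12fc2abf1" here
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods    # lists exactly the two allowed methods

  # Any other method fails with "Method not found" (code -32601), as logged above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats || true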
00:05:19.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:19.154 17:09:01 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.154 17:09:01 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.154 17:09:01 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.154 17:09:02 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.154 17:09:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.154 17:09:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.154 17:09:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.154 17:09:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.154 17:09:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.154 17:09:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.154 17:09:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.154 17:09:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.154 17:09:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.154 17:09:02 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.154 17:09:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.154 17:09:02 version -- scripts/common.sh@344 -- # case "$op" in 00:05:19.154 17:09:02 version -- scripts/common.sh@345 -- # : 1 00:05:19.154 17:09:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.154 17:09:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.154 17:09:02 version -- scripts/common.sh@365 -- # decimal 1 00:05:19.154 17:09:02 version -- scripts/common.sh@353 -- # local d=1 00:05:19.154 17:09:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.154 17:09:02 version -- scripts/common.sh@355 -- # echo 1 00:05:19.154 17:09:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.154 17:09:02 version -- scripts/common.sh@366 -- # decimal 2 00:05:19.154 17:09:02 version -- scripts/common.sh@353 -- # local d=2 00:05:19.154 17:09:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.154 17:09:02 version -- scripts/common.sh@355 -- # echo 2 00:05:19.154 17:09:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.154 17:09:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.154 17:09:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.154 17:09:02 version -- scripts/common.sh@368 -- # return 0 00:05:19.154 17:09:02 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.154 17:09:02 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.154 --rc genhtml_branch_coverage=1 00:05:19.154 --rc genhtml_function_coverage=1 00:05:19.154 --rc genhtml_legend=1 00:05:19.154 --rc geninfo_all_blocks=1 00:05:19.154 --rc geninfo_unexecuted_blocks=1 00:05:19.154 00:05:19.154 ' 00:05:19.154 17:09:02 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.154 --rc genhtml_branch_coverage=1 00:05:19.154 --rc genhtml_function_coverage=1 00:05:19.154 --rc genhtml_legend=1 00:05:19.154 --rc geninfo_all_blocks=1 00:05:19.154 --rc geninfo_unexecuted_blocks=1 00:05:19.154 00:05:19.154 ' 00:05:19.154 17:09:02 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.154 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:19.154 --rc genhtml_branch_coverage=1 00:05:19.154 --rc genhtml_function_coverage=1 00:05:19.154 --rc genhtml_legend=1 00:05:19.154 --rc geninfo_all_blocks=1 00:05:19.154 --rc geninfo_unexecuted_blocks=1 00:05:19.154 00:05:19.154 ' 00:05:19.154 17:09:02 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.154 --rc genhtml_branch_coverage=1 00:05:19.154 --rc genhtml_function_coverage=1 00:05:19.154 --rc genhtml_legend=1 00:05:19.155 --rc geninfo_all_blocks=1 00:05:19.155 --rc geninfo_unexecuted_blocks=1 00:05:19.155 00:05:19.155 ' 00:05:19.155 17:09:02 version -- app/version.sh@17 -- # get_header_version major 00:05:19.155 17:09:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:19.155 17:09:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.155 17:09:02 version -- app/version.sh@14 -- # cut -f2 00:05:19.155 17:09:02 version -- app/version.sh@17 -- # major=25 00:05:19.155 17:09:02 version -- app/version.sh@18 -- # get_header_version minor 00:05:19.155 17:09:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:19.155 17:09:02 version -- app/version.sh@14 -- # cut -f2 00:05:19.155 17:09:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.155 17:09:02 version -- app/version.sh@18 -- # minor=1 00:05:19.155 17:09:02 version -- app/version.sh@19 -- # get_header_version patch 00:05:19.155 17:09:02 version -- app/version.sh@14 -- # cut -f2 00:05:19.155 17:09:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.155 17:09:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:19.155 17:09:02 version -- app/version.sh@19 -- # patch=0 00:05:19.155 17:09:02 version -- app/version.sh@20 -- # get_header_version suffix 00:05:19.155 17:09:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:19.155 17:09:02 version -- app/version.sh@14 -- # cut -f2 00:05:19.155 17:09:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.155 17:09:02 version -- app/version.sh@20 -- # suffix=-pre 00:05:19.155 17:09:02 version -- app/version.sh@22 -- # version=25.1 00:05:19.155 17:09:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:19.155 17:09:02 version -- app/version.sh@28 -- # version=25.1rc0 00:05:19.155 17:09:02 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:19.155 17:09:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:19.155 17:09:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:19.155 17:09:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:19.155 00:05:19.155 real 0m0.169s 00:05:19.155 user 0m0.112s 00:05:19.155 sys 0m0.079s 00:05:19.155 17:09:02 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.155 17:09:02 version -- common/autotest_common.sh@10 -- # set +x 00:05:19.155 ************************************ 00:05:19.155 END TEST version 00:05:19.155 ************************************ 00:05:19.155 17:09:02 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:19.155 17:09:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:19.155 17:09:02 -- spdk/autotest.sh@194 -- # uname -s 00:05:19.155 17:09:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:19.155 17:09:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.155 17:09:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.155 17:09:02 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:19.155 17:09:02 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:19.155 17:09:02 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:19.155 17:09:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.155 17:09:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.155 ************************************ 00:05:19.155 START TEST blockdev_nvme 00:05:19.155 ************************************ 00:05:19.155 17:09:02 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:19.413 * Looking for test storage... 00:05:19.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:19.413 17:09:02 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.413 17:09:02 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.414 17:09:02 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.414 --rc genhtml_branch_coverage=1 00:05:19.414 --rc genhtml_function_coverage=1 00:05:19.414 --rc genhtml_legend=1 00:05:19.414 --rc geninfo_all_blocks=1 00:05:19.414 --rc geninfo_unexecuted_blocks=1 00:05:19.414 00:05:19.414 ' 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.414 --rc genhtml_branch_coverage=1 00:05:19.414 --rc genhtml_function_coverage=1 00:05:19.414 --rc genhtml_legend=1 00:05:19.414 --rc geninfo_all_blocks=1 00:05:19.414 --rc geninfo_unexecuted_blocks=1 00:05:19.414 00:05:19.414 ' 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.414 --rc genhtml_branch_coverage=1 00:05:19.414 --rc genhtml_function_coverage=1 00:05:19.414 --rc genhtml_legend=1 00:05:19.414 --rc geninfo_all_blocks=1 00:05:19.414 --rc geninfo_unexecuted_blocks=1 00:05:19.414 00:05:19.414 ' 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.414 --rc genhtml_branch_coverage=1 00:05:19.414 --rc genhtml_function_coverage=1 00:05:19.414 --rc genhtml_legend=1 00:05:19.414 --rc geninfo_all_blocks=1 00:05:19.414 --rc geninfo_unexecuted_blocks=1 00:05:19.414 00:05:19.414 ' 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:19.414 17:09:02 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59700 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:19.414 17:09:02 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59700 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59700 ']' 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.414 17:09:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:19.414 [2024-10-30 17:09:02.319487] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
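The target started here is then configured by setup_nvme_conf, which feeds gen_nvme.sh output into load_subsystem_config (shown in full just below). For orientation, the payload it passes simply attaches one NVMe bdev controller per QEMU PCIe device; a hand-written equivalent, reformatted for readability from the command logged below, would be:

  json='{ "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
    { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
    { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
    { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
  ] }'
  # passed to the running target exactly as in the log: rpc_cmd load_subsystem_config -j "$json"

Treat this as illustrative whitespace-only reformatting of the generated config, not as the literal gen_nvme.sh output.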
00:05:19.414 [2024-10-30 17:09:02.319605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59700 ] 00:05:19.673 [2024-10-30 17:09:02.470397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.673 [2024-10-30 17:09:02.546094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.240 17:09:03 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.240 17:09:03 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:05:20.240 17:09:03 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:20.240 17:09:03 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:20.240 17:09:03 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:20.240 17:09:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:20.240 17:09:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.240 17:09:03 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:20.240 17:09:03 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.240 17:09:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:20.498 17:09:03 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.498 17:09:03 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:20.498 17:09:03 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.498 17:09:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.758 17:09:03 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:20.758 17:09:03 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.758 17:09:03 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.758 17:09:03 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.758 17:09:03 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:20.758 17:09:03 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:20.758 17:09:03 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:20.758 17:09:03 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.758 17:09:03 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:20.758 17:09:03 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:20.759 17:09:03 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fd132598-371c-46f7-bd3a-037cb843b880"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fd132598-371c-46f7-bd3a-037cb843b880",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b5d907a4-eea1-4645-a48b-0ab7ab2c0a81"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b5d907a4-eea1-4645-a48b-0ab7ab2c0a81",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fa22e09e-845e-4bdc-8292-68317343cd18"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fa22e09e-845e-4bdc-8292-68317343cd18",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0b3d67db-3069-4529-a312-11028bf404b0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0b3d67db-3069-4529-a312-11028bf404b0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0a18a06a-ddaf-4d33-acc1-1a45576f762a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "0a18a06a-ddaf-4d33-acc1-1a45576f762a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "376e3d4b-7a35-4cec-9ce1-c54ad12b518c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "376e3d4b-7a35-4cec-9ce1-c54ad12b518c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:20.759 17:09:03 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:20.759 17:09:03 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:20.759 17:09:03 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:20.759 17:09:03 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59700 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59700 ']' 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59700 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:05:20.759 17:09:03 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59700 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:20.759 killing process with pid 59700 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59700' 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59700 00:05:20.759 17:09:03 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59700 00:05:22.137 17:09:04 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:22.137 17:09:04 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:22.137 17:09:04 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:05:22.137 17:09:04 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.137 17:09:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:22.137 ************************************ 00:05:22.137 START TEST bdev_hello_world 00:05:22.137 ************************************ 00:05:22.137 17:09:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:22.137 [2024-10-30 17:09:04.851415] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:05:22.137 [2024-10-30 17:09:04.851532] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59778 ] 00:05:22.137 [2024-10-30 17:09:05.006934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.137 [2024-10-30 17:09:05.081389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.704 [2024-10-30 17:09:05.567293] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:22.704 [2024-10-30 17:09:05.567334] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:22.704 [2024-10-30 17:09:05.567351] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:22.704 [2024-10-30 17:09:05.573372] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:22.704 [2024-10-30 17:09:05.574258] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:22.704 [2024-10-30 17:09:05.574337] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:22.704 [2024-10-30 17:09:05.574746] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
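(The bdev_hello_world run above drives the hello_bdev example against the Nvme0n1 bdev using the JSON configuration generated earlier by scripts/gen_nvme.sh. A minimal sketch of that step, assuming the same repository layout under /home/vagrant/spdk_repo/spdk and the four QEMU NVMe controllers attached in this run; the configuration body and the command line are taken from the log records above, not from any other source:

    # bdev.json - NVMe bdev config as emitted by scripts/gen_nvme.sh in this run
    { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } } ] }

    # run the example against the first namespace, as the test script does above
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1

On success the example opens the bdev, writes a buffer, reads it back, and prints "Read string from bdev : Hello World!" before stopping the app, matching the hello_bdev.c NOTICE lines recorded above.)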
00:05:22.704 00:05:22.704 [2024-10-30 17:09:05.574828] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:23.640 00:05:23.640 real 0m1.476s 00:05:23.640 user 0m1.218s 00:05:23.640 sys 0m0.153s 00:05:23.640 17:09:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.640 ************************************ 00:05:23.640 END TEST bdev_hello_world 00:05:23.640 ************************************ 00:05:23.640 17:09:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:23.640 17:09:06 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:23.640 17:09:06 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:23.640 17:09:06 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.640 17:09:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:23.640 ************************************ 00:05:23.640 START TEST bdev_bounds 00:05:23.640 ************************************ 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:05:23.640 Process bdevio pid: 59815 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59815 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59815' 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59815 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 59815 ']' 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.640 17:09:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:23.640 [2024-10-30 17:09:06.364892] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:05:23.640 [2024-10-30 17:09:06.365014] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59815 ] 00:05:23.640 [2024-10-30 17:09:06.525269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.897 [2024-10-30 17:09:06.622161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.897 [2024-10-30 17:09:06.622368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.897 [2024-10-30 17:09:06.622551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.464 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.464 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:05:24.464 17:09:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:24.464 I/O targets: 00:05:24.464 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:24.464 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:24.464 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:24.464 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:24.464 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:24.464 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:24.464 00:05:24.464 00:05:24.464 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.464 http://cunit.sourceforge.net/ 00:05:24.464 00:05:24.464 00:05:24.464 Suite: bdevio tests on: Nvme3n1 00:05:24.464 Test: blockdev write read block ...passed 00:05:24.464 Test: blockdev write zeroes read block ...passed 00:05:24.464 Test: blockdev write zeroes read no split ...passed 00:05:24.464 Test: blockdev write zeroes read split ...passed 00:05:24.464 Test: blockdev write zeroes read split partial ...passed 00:05:24.464 Test: blockdev reset ...[2024-10-30 17:09:07.313062] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:24.464 [2024-10-30 17:09:07.315838] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:24.464 passed 00:05:24.464 Test: blockdev write read 8 blocks ...passed 00:05:24.464 Test: blockdev write read size > 128k ...passed 00:05:24.464 Test: blockdev write read invalid size ...passed 00:05:24.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:24.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:24.465 Test: blockdev write read max offset ...passed 00:05:24.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:24.465 Test: blockdev writev readv 8 blocks ...passed 00:05:24.465 Test: blockdev writev readv 30 x 1block ...passed 00:05:24.465 Test: blockdev writev readv block ...passed 00:05:24.465 Test: blockdev writev readv size > 128k ...passed 00:05:24.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:24.465 Test: blockdev comparev and writev ...[2024-10-30 17:09:07.322081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b400a000 len:0x1000 00:05:24.465 [2024-10-30 17:09:07.322211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:24.465 passed 00:05:24.465 Test: blockdev nvme passthru rw ...passed 00:05:24.465 Test: blockdev nvme passthru vendor specific ...[2024-10-30 17:09:07.322808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:24.465 [2024-10-30 17:09:07.322890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:24.465 passed 00:05:24.465 Test: blockdev nvme admin passthru ...passed 00:05:24.465 Test: blockdev copy ...passed 00:05:24.465 Suite: bdevio tests on: Nvme2n3 00:05:24.465 Test: blockdev write read block ...passed 00:05:24.465 Test: blockdev write zeroes read block ...passed 00:05:24.465 Test: blockdev write zeroes read no split ...passed 00:05:24.465 Test: blockdev write zeroes read split ...passed 00:05:24.465 Test: blockdev write zeroes read split partial ...passed 00:05:24.465 Test: blockdev reset ...[2024-10-30 17:09:07.364605] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:24.465 [2024-10-30 17:09:07.367588] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:24.465 passed 00:05:24.465 Test: blockdev write read 8 blocks ...passed 00:05:24.465 Test: blockdev write read size > 128k ...passed 00:05:24.465 Test: blockdev write read invalid size ...passed 00:05:24.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:24.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:24.465 Test: blockdev write read max offset ...passed 00:05:24.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:24.465 Test: blockdev writev readv 8 blocks ...passed 00:05:24.465 Test: blockdev writev readv 30 x 1block ...passed 00:05:24.465 Test: blockdev writev readv block ...passed 00:05:24.465 Test: blockdev writev readv size > 128k ...passed 00:05:24.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:24.465 Test: blockdev comparev and writev ...[2024-10-30 17:09:07.373000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x297a06000 len:0x1000 00:05:24.465 [2024-10-30 17:09:07.373099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:24.465 passed 00:05:24.465 Test: blockdev nvme passthru rw ...passed 00:05:24.465 Test: blockdev nvme passthru vendor specific ...[2024-10-30 17:09:07.373607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:24.465 [2024-10-30 17:09:07.373688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:24.465 passed 00:05:24.465 Test: blockdev nvme admin passthru ...passed 00:05:24.465 Test: blockdev copy ...passed 00:05:24.465 Suite: bdevio tests on: Nvme2n2 00:05:24.465 Test: blockdev write read block ...passed 00:05:24.465 Test: blockdev write zeroes read block ...passed 00:05:24.465 Test: blockdev write zeroes read no split ...passed 00:05:24.465 Test: blockdev write zeroes read split ...passed 00:05:24.465 Test: blockdev write zeroes read split partial ...passed 00:05:24.465 Test: blockdev reset ...[2024-10-30 17:09:07.415106] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:24.465 [2024-10-30 17:09:07.418051] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:24.465 passed 00:05:24.465 Test: blockdev write read 8 blocks ...passed 00:05:24.465 Test: blockdev write read size > 128k ...passed 00:05:24.465 Test: blockdev write read invalid size ...passed 00:05:24.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:24.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:24.465 Test: blockdev write read max offset ...passed 00:05:24.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:24.465 Test: blockdev writev readv 8 blocks ...passed 00:05:24.465 Test: blockdev writev readv 30 x 1block ...passed 00:05:24.465 Test: blockdev writev readv block ...passed 00:05:24.465 Test: blockdev writev readv size > 128k ...passed 00:05:24.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:24.465 Test: blockdev comparev and writev ...[2024-10-30 17:09:07.423974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d563c000 len:0x1000 00:05:24.465 [2024-10-30 17:09:07.424067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:24.465 passed 00:05:24.465 Test: blockdev nvme passthru rw ...passed 00:05:24.465 Test: blockdev nvme passthru vendor specific ...passed 00:05:24.465 Test: blockdev nvme admin passthru ...[2024-10-30 17:09:07.424658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:24.465 [2024-10-30 17:09:07.424722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:24.465 passed 00:05:24.465 Test: blockdev copy ...passed 00:05:24.465 Suite: bdevio tests on: Nvme2n1 00:05:24.465 Test: blockdev write read block ...passed 00:05:24.465 Test: blockdev write zeroes read block ...passed 00:05:24.465 Test: blockdev write zeroes read no split ...passed 00:05:24.723 Test: blockdev write zeroes read split ...passed 00:05:24.723 Test: blockdev write zeroes read split partial ...passed 00:05:24.724 Test: blockdev reset ...[2024-10-30 17:09:07.467784] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:24.724 [2024-10-30 17:09:07.470635] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:24.724 passed 00:05:24.724 Test: blockdev write read 8 blocks ...passed 00:05:24.724 Test: blockdev write read size > 128k ...passed 00:05:24.724 Test: blockdev write read invalid size ...passed 00:05:24.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:24.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:24.724 Test: blockdev write read max offset ...passed 00:05:24.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:24.724 Test: blockdev writev readv 8 blocks ...passed 00:05:24.724 Test: blockdev writev readv 30 x 1block ...passed 00:05:24.724 Test: blockdev writev readv block ...passed 00:05:24.724 Test: blockdev writev readv size > 128k ...passed 00:05:24.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:24.724 Test: blockdev comparev and writev ...[2024-10-30 17:09:07.476354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5638000 len:0x1000 00:05:24.724 [2024-10-30 17:09:07.476443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:24.724 passed 00:05:24.724 Test: blockdev nvme passthru rw ...passed 00:05:24.724 Test: blockdev nvme passthru vendor specific ...[2024-10-30 17:09:07.476960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:24.724 [2024-10-30 17:09:07.477022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:24.724 passed 00:05:24.724 Test: blockdev nvme admin passthru ...passed 00:05:24.724 Test: blockdev copy ...passed 00:05:24.724 Suite: bdevio tests on: Nvme1n1 00:05:24.724 Test: blockdev write read block ...passed 00:05:24.724 Test: blockdev write zeroes read block ...passed 00:05:24.724 Test: blockdev write zeroes read no split ...passed 00:05:24.724 Test: blockdev write zeroes read split ...passed 00:05:24.724 Test: blockdev write zeroes read split partial ...passed 00:05:24.724 Test: blockdev reset ...[2024-10-30 17:09:07.518129] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:24.724 [2024-10-30 17:09:07.520888] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:24.724 passed 00:05:24.724 Test: blockdev write read 8 blocks ...passed 00:05:24.724 Test: blockdev write read size > 128k ...passed 00:05:24.724 Test: blockdev write read invalid size ...passed 00:05:24.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:24.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:24.724 Test: blockdev write read max offset ...passed 00:05:24.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:24.724 Test: blockdev writev readv 8 blocks ...passed 00:05:24.724 Test: blockdev writev readv 30 x 1block ...passed 00:05:24.724 Test: blockdev writev readv block ...passed 00:05:24.724 Test: blockdev writev readv size > 128k ...passed 00:05:24.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:24.724 Test: blockdev comparev and writev ...[2024-10-30 17:09:07.526862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5634000 len:0x1000 00:05:24.724 [2024-10-30 17:09:07.526961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:24.724 passed 00:05:24.724 Test: blockdev nvme passthru rw ...passed 00:05:24.724 Test: blockdev nvme passthru vendor specific ...[2024-10-30 17:09:07.527547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:24.724 [2024-10-30 17:09:07.527611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:24.724 passed 00:05:24.724 Test: blockdev nvme admin passthru ...passed 00:05:24.724 Test: blockdev copy ...passed 00:05:24.724 Suite: bdevio tests on: Nvme0n1 00:05:24.724 Test: blockdev write read block ...passed 00:05:24.724 Test: blockdev write zeroes read block ...passed 00:05:24.724 Test: blockdev write zeroes read no split ...passed 00:05:24.724 Test: blockdev write zeroes read split ...passed 00:05:24.724 Test: blockdev write zeroes read split partial ...passed 00:05:24.724 Test: blockdev reset ...[2024-10-30 17:09:07.569169] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:24.724 [2024-10-30 17:09:07.571963] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:24.724 passed 00:05:24.724 Test: blockdev write read 8 blocks ...passed 00:05:24.724 Test: blockdev write read size > 128k ...passed 00:05:24.724 Test: blockdev write read invalid size ...passed 00:05:24.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:24.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:24.724 Test: blockdev write read max offset ...passed 00:05:24.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:24.724 Test: blockdev writev readv 8 blocks ...passed 00:05:24.724 Test: blockdev writev readv 30 x 1block ...passed 00:05:24.724 Test: blockdev writev readv block ...passed 00:05:24.724 Test: blockdev writev readv size > 128k ...passed 00:05:24.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:24.724 Test: blockdev comparev and writev ...[2024-10-30 17:09:07.576719] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:24.724 separate metadata which is not supported yet. 
00:05:24.724 passed 00:05:24.724 Test: blockdev nvme passthru rw ...passed 00:05:24.724 Test: blockdev nvme passthru vendor specific ...[2024-10-30 17:09:07.577057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:24.724 [2024-10-30 17:09:07.577092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:24.724 passed 00:05:24.724 Test: blockdev nvme admin passthru ...passed 00:05:24.724 Test: blockdev copy ...passed 00:05:24.724 00:05:24.724 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.724 suites 6 6 n/a 0 0 00:05:24.724 tests 138 138 138 0 0 00:05:24.724 asserts 893 893 893 0 n/a 00:05:24.724 00:05:24.724 Elapsed time = 0.838 seconds 00:05:24.724 0 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59815 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 59815 ']' 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 59815 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59815 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:24.724 killing process with pid 59815 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59815' 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 59815 00:05:24.724 17:09:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 59815 00:05:25.289 17:09:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:25.289 00:05:25.289 real 0m1.955s 00:05:25.289 user 0m4.953s 00:05:25.289 sys 0m0.269s 00:05:25.289 17:09:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.289 17:09:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:25.289 ************************************ 00:05:25.289 END TEST bdev_bounds 00:05:25.289 ************************************ 00:05:25.547 17:09:08 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:25.547 17:09:08 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:25.547 17:09:08 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.547 17:09:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:25.547 ************************************ 00:05:25.547 START TEST bdev_nbd 00:05:25.547 ************************************ 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:25.547 17:09:08 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59869 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59869 /var/tmp/spdk-nbd.sock 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 59869 ']' 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.547 17:09:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:25.547 [2024-10-30 17:09:08.372244] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:05:25.547 [2024-10-30 17:09:08.372356] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:25.805 [2024-10-30 17:09:08.532342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.805 [2024-10-30 17:09:08.628572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:26.371 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:26.629 1+0 records in 
00:05:26.629 1+0 records out 00:05:26.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558798 s, 7.3 MB/s 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:26.629 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:26.886 1+0 records in 00:05:26.886 1+0 records out 00:05:26.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526879 s, 7.8 MB/s 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:26.886 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:27.144 1+0 records in 00:05:27.144 1+0 records out 00:05:27.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000966475 s, 4.2 MB/s 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:27.144 17:09:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:27.145 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:27.145 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:27.145 17:09:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:27.402 1+0 records in 00:05:27.402 1+0 records out 00:05:27.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011385 s, 3.6 MB/s 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.402 17:09:10 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:27.402 1+0 records in 00:05:27.402 1+0 records out 00:05:27.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109507 s, 3.7 MB/s 00:05:27.402 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:27.661 1+0 records in 00:05:27.661 1+0 records out 00:05:27.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991036 s, 4.1 MB/s 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:27.661 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.919 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:27.919 { 00:05:27.919 "nbd_device": "/dev/nbd0", 00:05:27.919 "bdev_name": "Nvme0n1" 00:05:27.919 }, 00:05:27.919 { 00:05:27.919 "nbd_device": "/dev/nbd1", 00:05:27.919 "bdev_name": "Nvme1n1" 00:05:27.919 }, 00:05:27.919 { 00:05:27.919 "nbd_device": "/dev/nbd2", 00:05:27.920 "bdev_name": "Nvme2n1" 00:05:27.920 }, 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd3", 00:05:27.920 "bdev_name": "Nvme2n2" 00:05:27.920 }, 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd4", 00:05:27.920 "bdev_name": "Nvme2n3" 00:05:27.920 }, 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd5", 00:05:27.920 "bdev_name": "Nvme3n1" 00:05:27.920 } 00:05:27.920 ]' 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd0", 00:05:27.920 "bdev_name": "Nvme0n1" 00:05:27.920 }, 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd1", 00:05:27.920 "bdev_name": "Nvme1n1" 00:05:27.920 }, 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd2", 00:05:27.920 "bdev_name": "Nvme2n1" 00:05:27.920 }, 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd3", 00:05:27.920 "bdev_name": "Nvme2n2" 00:05:27.920 }, 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd4", 00:05:27.920 "bdev_name": "Nvme2n3" 00:05:27.920 }, 00:05:27.920 { 00:05:27.920 "nbd_device": "/dev/nbd5", 00:05:27.920 "bdev_name": "Nvme3n1" 00:05:27.920 } 00:05:27.920 ]' 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.920 17:09:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.178 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.437 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.696 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.953 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.954 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:28.954 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:28.954 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.954 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.954 17:09:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.212 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.470 17:09:12 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:29.470 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.471 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:29.471 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.471 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:29.471 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:29.729 /dev/nbd0 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:29.729 
17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:29.729 1+0 records in 00:05:29.729 1+0 records out 00:05:29.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312917 s, 13.1 MB/s 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:29.729 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:29.987 /dev/nbd1 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:29.987 1+0 records in 00:05:29.987 1+0 records out 00:05:29.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102666 s, 4.0 MB/s 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@891 -- # return 0 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:29.987 17:09:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:30.245 /dev/nbd10 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:30.245 1+0 records in 00:05:30.245 1+0 records out 00:05:30.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100043 s, 4.1 MB/s 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:30.245 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:30.503 /dev/nbd11 00:05:30.503 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:30.503 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:30.503 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:05:30.503 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:30.503 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:30.503 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:30.503 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:05:30.503 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:30.504 17:09:13 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:30.504 1+0 records in 00:05:30.504 1+0 records out 00:05:30.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781924 s, 5.2 MB/s 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:30.504 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:30.762 /dev/nbd12 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:30.762 1+0 records in 00:05:30.762 1+0 records out 00:05:30.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000994874 s, 4.1 MB/s 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:30.762 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:30.762 /dev/nbd13 
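Each nbd_start_disk above (and the one for /dev/nbd13 that follows) is gated by the same readiness check: poll /proc/partitions until the new nbd name appears, then prove the device answers reads by copying a single 4 KiB block with O_DIRECT and checking that something landed in the scratch file. A minimal bash sketch of that check, reconstructed from the common/autotest_common.sh@870-891 trace (the scratch path is shortened, and the sleep between retries is an assumption since the traced runs never needed one):

    # Sketch of the readiness check the log traces for each nbd export.
    waitfornbd() {
        local nbd_name=$1 i size
        # Wait (up to 20 tries) for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Read one 4 KiB block with O_DIRECT and make sure it is non-empty.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }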
00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:31.021 1+0 records in 00:05:31.021 1+0 records out 00:05:31.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103525 s, 4.0 MB/s 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd0", 00:05:31.021 "bdev_name": "Nvme0n1" 00:05:31.021 }, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd1", 00:05:31.021 "bdev_name": "Nvme1n1" 00:05:31.021 }, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd10", 00:05:31.021 "bdev_name": "Nvme2n1" 00:05:31.021 }, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd11", 00:05:31.021 "bdev_name": "Nvme2n2" 00:05:31.021 }, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd12", 00:05:31.021 "bdev_name": "Nvme2n3" 00:05:31.021 }, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd13", 00:05:31.021 "bdev_name": "Nvme3n1" 00:05:31.021 } 00:05:31.021 ]' 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.021 17:09:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd0", 00:05:31.021 "bdev_name": "Nvme0n1" 00:05:31.021 }, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd1", 00:05:31.021 "bdev_name": "Nvme1n1" 00:05:31.021 
}, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd10", 00:05:31.021 "bdev_name": "Nvme2n1" 00:05:31.021 }, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd11", 00:05:31.021 "bdev_name": "Nvme2n2" 00:05:31.021 }, 00:05:31.021 { 00:05:31.021 "nbd_device": "/dev/nbd12", 00:05:31.022 "bdev_name": "Nvme2n3" 00:05:31.022 }, 00:05:31.022 { 00:05:31.022 "nbd_device": "/dev/nbd13", 00:05:31.022 "bdev_name": "Nvme3n1" 00:05:31.022 } 00:05:31.022 ]' 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.281 /dev/nbd1 00:05:31.281 /dev/nbd10 00:05:31.281 /dev/nbd11 00:05:31.281 /dev/nbd12 00:05:31.281 /dev/nbd13' 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.281 /dev/nbd1 00:05:31.281 /dev/nbd10 00:05:31.281 /dev/nbd11 00:05:31.281 /dev/nbd12 00:05:31.281 /dev/nbd13' 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:31.281 256+0 records in 00:05:31.281 256+0 records out 00:05:31.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041865 s, 250 MB/s 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.281 256+0 records in 00:05:31.281 256+0 records out 00:05:31.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151355 s, 6.9 MB/s 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.281 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.540 256+0 records in 00:05:31.540 256+0 records out 00:05:31.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224904 s, 4.7 MB/s 00:05:31.540 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.540 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:31.799 256+0 records in 00:05:31.799 256+0 records out 00:05:31.799 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.221715 s, 4.7 MB/s 00:05:31.799 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.799 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:32.057 256+0 records in 00:05:32.057 256+0 records out 00:05:32.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233816 s, 4.5 MB/s 00:05:32.057 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.057 17:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:32.315 256+0 records in 00:05:32.315 256+0 records out 00:05:32.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182479 s, 5.7 MB/s 00:05:32.315 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.315 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:32.315 256+0 records in 00:05:32.315 256+0 records out 00:05:32.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2243 s, 4.7 MB/s 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.316 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.575 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.834 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:32.834 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.835 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.093 17:09:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.093 17:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.352 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.699 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:33.956 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.957 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:33.957 17:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:34.215 malloc_lvol_verify 00:05:34.215 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:34.473 9b13d66b-a61b-49ba-b908-5773f4f93bb2 00:05:34.473 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:34.473 0e9598a3-cec1-4ac8-8b32-3605c06a6e44 00:05:34.473 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:34.731 /dev/nbd0 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:34.731 mke2fs 1.47.0 (5-Feb-2023) 00:05:34.731 Discarding device blocks: 0/4096 done 00:05:34.731 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:34.731 00:05:34.731 Allocating group tables: 0/1 done 00:05:34.731 Writing inode tables: 0/1 done 00:05:34.731 Creating journal (1024 blocks): done 00:05:34.731 Writing superblocks and filesystem accounting information: 0/1 done 00:05:34.731 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:34.731 17:09:17 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.731 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59869 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 59869 ']' 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 59869 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59869 00:05:34.989 killing process with pid 59869 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59869' 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 59869 00:05:34.989 17:09:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 59869 00:05:35.925 17:09:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:35.925 00:05:35.925 real 0m10.359s 00:05:35.925 user 0m14.269s 00:05:35.925 sys 0m3.193s 00:05:35.925 17:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.925 ************************************ 00:05:35.925 END TEST bdev_nbd 00:05:35.925 ************************************ 00:05:35.925 17:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:35.925 skipping fio tests on NVMe due to multi-ns failures. 00:05:35.925 17:09:18 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:35.925 17:09:18 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:05:35.925 17:09:18 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
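Before the nbd test above tore everything down, nbd_dd_data_verify (bdev/nbd_common.sh@70-85 in the trace) checked data integrity the simple way: write one random 1 MiB pattern to every export, then read each device back and byte-compare it against the pattern file. A condensed bash sketch of what the trace shows, with the scratch path shortened and everything else mirroring the logged commands:

    nbd_dd_data_verify() {
        local nbd_list=$1 operation=$2
        local tmp_file=/tmp/nbdrandtest   # the log uses test/bdev/nbdrandtest
        if [ "$operation" = write ]; then
            # One 1 MiB random pattern, pushed to every nbd device with O_DIRECT.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for nbd in $nbd_list; do
                dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # Read each device back and compare the first 1 MiB byte-for-byte.
            for nbd in $nbd_list; do
                cmp -b -n 1M "$tmp_file" "$nbd"
            done
            rm "$tmp_file"
        fi
    }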
00:05:35.925 17:09:18 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:35.925 17:09:18 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:35.925 17:09:18 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:05:35.925 17:09:18 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.925 17:09:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:35.925 ************************************ 00:05:35.925 START TEST bdev_verify 00:05:35.925 ************************************ 00:05:35.925 17:09:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:35.925 [2024-10-30 17:09:18.793846] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:05:35.925 [2024-10-30 17:09:18.793961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60247 ] 00:05:36.184 [2024-10-30 17:09:18.955401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.184 [2024-10-30 17:09:19.051647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.184 [2024-10-30 17:09:19.051744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.752 Running I/O for 5 seconds... 00:05:39.059 18432.00 IOPS, 72.00 MiB/s [2024-10-30T17:09:22.974Z] 20224.00 IOPS, 79.00 MiB/s [2024-10-30T17:09:23.908Z] 20416.00 IOPS, 79.75 MiB/s [2024-10-30T17:09:24.843Z] 20160.00 IOPS, 78.75 MiB/s [2024-10-30T17:09:24.843Z] 19891.20 IOPS, 77.70 MiB/s 00:05:41.863 Latency(us) 00:05:41.863 [2024-10-30T17:09:24.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:41.863 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x0 length 0xbd0bd 00:05:41.863 Nvme0n1 : 5.07 1617.04 6.32 0.00 0.00 78958.69 16938.54 93968.54 00:05:41.863 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:41.863 Nvme0n1 : 5.04 1650.09 6.45 0.00 0.00 77266.30 16131.94 88725.66 00:05:41.863 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x0 length 0xa0000 00:05:41.863 Nvme1n1 : 5.07 1616.05 6.31 0.00 0.00 78838.00 19660.80 79449.80 00:05:41.863 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0xa0000 length 0xa0000 00:05:41.863 Nvme1n1 : 5.04 1649.62 6.44 0.00 0.00 77131.02 17543.48 73803.62 00:05:41.863 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x0 length 0x80000 00:05:41.863 Nvme2n1 : 5.07 1615.63 6.31 0.00 0.00 78631.06 17845.96 73803.62 00:05:41.863 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x80000 length 0x80000 00:05:41.863 Nvme2n1 : 5.08 1662.71 6.49 0.00 0.00 76369.35 14619.57 65334.35 00:05:41.863 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x0 length 0x80000 00:05:41.863 Nvme2n2 : 5.07 1615.20 6.31 0.00 0.00 78467.02 17442.66 68157.44 00:05:41.863 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x80000 length 0x80000 00:05:41.863 Nvme2n2 : 5.08 1662.01 6.49 0.00 0.00 76208.04 15022.87 66140.95 00:05:41.863 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x0 length 0x80000 00:05:41.863 Nvme2n3 : 5.07 1614.78 6.31 0.00 0.00 78340.02 17241.01 69367.34 00:05:41.863 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x80000 length 0x80000 00:05:41.863 Nvme2n3 : 5.08 1661.59 6.49 0.00 0.00 75964.45 15022.87 67350.84 00:05:41.863 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x0 length 0x20000 00:05:41.863 Nvme3n1 : 5.08 1625.11 6.35 0.00 0.00 77686.14 2470.20 71787.13 00:05:41.863 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:41.863 Verification LBA range: start 0x20000 length 0x20000 00:05:41.863 Nvme3n1 : 5.09 1672.83 6.53 0.00 0.00 75365.87 1310.72 70577.23 00:05:41.863 [2024-10-30T17:09:24.844Z] =================================================================================================================== 00:05:41.863 [2024-10-30T17:09:24.844Z] Total : 19662.66 76.81 0.00 0.00 77418.74 1310.72 93968.54 00:05:42.846 00:05:42.846 real 0m7.059s 00:05:42.846 user 0m13.201s 00:05:42.846 sys 0m0.215s 00:05:42.846 17:09:25 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.846 ************************************ 00:05:42.846 END TEST bdev_verify 00:05:42.846 ************************************ 00:05:42.846 17:09:25 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:43.117 17:09:25 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:43.117 17:09:25 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:05:43.117 17:09:25 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.117 17:09:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:43.117 ************************************ 00:05:43.117 START TEST bdev_verify_big_io 00:05:43.117 ************************************ 00:05:43.117 17:09:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:43.117 [2024-10-30 17:09:25.916507] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
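bdev_verify, bdev_verify_big_io, and (below) bdev_write_zeroes all wrap the same bdevperf binary; between runs essentially only the I/O size (-o), workload (-w), run time (-t), and core mask (-m) change. Stripped of the absolute paths, the big-I/O run that is starting here amounts to the following, with the flags copied from the run_test line above (paths shortened for readability; the annotations beside each flag are interpretation, not log output):

    bdevperf_args=(
        --json test/bdev/bdev.json   # bdev configuration loaded at startup
        -q 128                       # queue depth
        -o 65536                     # I/O size in bytes (4096 in the plain verify run)
        -w verify                    # write a pattern, read it back, compare
        -t 5                         # run for 5 seconds
        -C                           # passed through exactly as in the log
        -m 0x3                       # core mask: cores 0 and 1
    )
    ./build/examples/bdevperf "${bdevperf_args[@]}"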
00:05:43.117 [2024-10-30 17:09:25.916617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60345 ] 00:05:43.117 [2024-10-30 17:09:26.077016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.376 [2024-10-30 17:09:26.180417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.377 [2024-10-30 17:09:26.180499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.943 Running I/O for 5 seconds... 00:05:48.602 852.00 IOPS, 53.25 MiB/s [2024-10-30T17:09:32.962Z] 2071.50 IOPS, 129.47 MiB/s [2024-10-30T17:09:32.962Z] 2812.00 IOPS, 175.75 MiB/s 00:05:49.981 Latency(us) 00:05:49.981 [2024-10-30T17:09:32.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:49.981 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x0 length 0xbd0b 00:05:49.981 Nvme0n1 : 5.63 136.42 8.53 0.00 0.00 904680.50 14417.92 1129235.69 00:05:49.981 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0xbd0b length 0xbd0b 00:05:49.981 Nvme0n1 : 5.66 131.18 8.20 0.00 0.00 931876.33 22383.06 1129235.69 00:05:49.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x0 length 0xa000 00:05:49.981 Nvme1n1 : 5.63 136.37 8.52 0.00 0.00 871506.18 98808.12 935652.43 00:05:49.981 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0xa000 length 0xa000 00:05:49.981 Nvme1n1 : 5.66 135.74 8.48 0.00 0.00 885878.68 85902.57 929199.66 00:05:49.981 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x0 length 0x8000 00:05:49.981 Nvme2n1 : 5.81 136.95 8.56 0.00 0.00 843023.36 43152.94 1729343.80 00:05:49.981 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x8000 length 0x8000 00:05:49.981 Nvme2n1 : 5.74 137.69 8.61 0.00 0.00 840992.67 75820.11 838860.80 00:05:49.981 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x0 length 0x8000 00:05:49.981 Nvme2n2 : 5.89 139.17 8.70 0.00 0.00 797005.80 68560.74 1755154.90 00:05:49.981 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x8000 length 0x8000 00:05:49.981 Nvme2n2 : 5.78 134.89 8.43 0.00 0.00 833764.63 41136.44 1755154.90 00:05:49.981 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x0 length 0x8000 00:05:49.981 Nvme2n3 : 5.96 154.74 9.67 0.00 0.00 704409.26 33070.47 1768060.46 00:05:49.981 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x8000 length 0x8000 00:05:49.981 Nvme2n3 : 5.86 153.00 9.56 0.00 0.00 713180.10 41539.74 987274.63 00:05:49.981 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:49.981 Verification LBA range: start 0x0 length 0x2000 00:05:49.981 Nvme3n1 : 5.97 169.01 10.56 0.00 0.00 624140.65 983.04 1806777.11 00:05:49.981 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:05:49.981 Verification LBA range: start 0x2000 length 0x2000 00:05:49.981 Nvme3n1 : 5.95 176.52 11.03 0.00 0.00 599968.98 538.78 1013085.74 00:05:49.981 [2024-10-30T17:09:32.962Z] =================================================================================================================== 00:05:49.981 [2024-10-30T17:09:32.962Z] Total : 1741.68 108.85 0.00 0.00 783773.60 538.78 1806777.11 00:05:51.358 00:05:51.358 real 0m8.244s 00:05:51.358 user 0m15.622s 00:05:51.358 sys 0m0.226s 00:05:51.358 17:09:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.358 17:09:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:05:51.358 ************************************ 00:05:51.358 END TEST bdev_verify_big_io 00:05:51.358 ************************************ 00:05:51.358 17:09:34 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:51.358 17:09:34 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:05:51.358 17:09:34 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.358 17:09:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:51.358 ************************************ 00:05:51.358 START TEST bdev_write_zeroes 00:05:51.358 ************************************ 00:05:51.358 17:09:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:51.358 [2024-10-30 17:09:34.224755] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:05:51.358 [2024-10-30 17:09:34.224873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60454 ] 00:05:51.618 [2024-10-30 17:09:34.384755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.618 [2024-10-30 17:09:34.480030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.190 Running I/O for 1 seconds... 
00:05:53.175 74880.00 IOPS, 292.50 MiB/s 00:05:53.175 Latency(us) 00:05:53.175 [2024-10-30T17:09:36.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:53.175 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:53.175 Nvme0n1 : 1.02 12408.54 48.47 0.00 0.00 10294.74 8469.27 20769.87 00:05:53.175 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:53.175 Nvme1n1 : 1.02 12394.24 48.42 0.00 0.00 10293.63 8721.33 20366.57 00:05:53.175 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:53.175 Nvme2n1 : 1.02 12380.30 48.36 0.00 0.00 10289.02 8519.68 20568.22 00:05:53.175 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:53.175 Nvme2n2 : 1.02 12366.37 48.31 0.00 0.00 10287.06 8620.50 19761.62 00:05:53.175 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:53.175 Nvme2n3 : 1.03 12352.27 48.25 0.00 0.00 10246.37 8469.27 18249.26 00:05:53.175 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:53.175 Nvme3n1 : 1.03 12338.41 48.20 0.00 0.00 10222.18 7208.96 20064.10 00:05:53.175 [2024-10-30T17:09:36.156Z] =================================================================================================================== 00:05:53.175 [2024-10-30T17:09:36.156Z] Total : 74240.13 290.00 0.00 0.00 10272.17 7208.96 20769.87 00:05:54.108 00:05:54.108 real 0m2.641s 00:05:54.108 user 0m2.352s 00:05:54.108 sys 0m0.176s 00:05:54.108 17:09:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.108 17:09:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:05:54.108 ************************************ 00:05:54.108 END TEST bdev_write_zeroes 00:05:54.108 ************************************ 00:05:54.108 17:09:36 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:54.108 17:09:36 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:05:54.108 17:09:36 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.108 17:09:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.108 ************************************ 00:05:54.108 START TEST bdev_json_nonenclosed 00:05:54.108 ************************************ 00:05:54.108 17:09:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:54.108 [2024-10-30 17:09:36.904305] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
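A quick consistency check on the write_zeroes table above: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size. For Nvme0n1, 12408.54 IOPS x 4096 B is about 50,825,380 B/s, and 50,825,380 / 2^20 is 48.47 MiB/s, matching the reported figure.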
00:05:54.108 [2024-10-30 17:09:36.904420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60507 ] 00:05:54.108 [2024-10-30 17:09:37.068702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.364 [2024-10-30 17:09:37.162026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.364 [2024-10-30 17:09:37.162105] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:05:54.364 [2024-10-30 17:09:37.162122] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:54.364 [2024-10-30 17:09:37.162130] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.364 00:05:54.364 real 0m0.491s 00:05:54.364 user 0m0.291s 00:05:54.364 sys 0m0.096s 00:05:54.364 17:09:37 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.364 17:09:37 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:05:54.364 ************************************ 00:05:54.364 END TEST bdev_json_nonenclosed 00:05:54.364 ************************************ 00:05:54.621 17:09:37 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:54.621 17:09:37 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:05:54.621 17:09:37 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.621 17:09:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.621 ************************************ 00:05:54.621 START TEST bdev_json_nonarray 00:05:54.621 ************************************ 00:05:54.621 17:09:37 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:54.621 [2024-10-30 17:09:37.434507] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:05:54.621 [2024-10-30 17:09:37.434608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:05:54.621 [2024-10-30 17:09:37.595398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.877 [2024-10-30 17:09:37.687890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.877 [2024-10-30 17:09:37.687974] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
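bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed --json file and expects the config loader to reject it, which is exactly what the two json_config_prepare_ctx *ERROR* lines above show before the app stops. The fixture files themselves are never printed in the log; purely as an illustration, configs that would trip those two specific checks could be generated like this (hypothetical contents, not the repository's actual nonenclosed.json / nonarray.json):

    # Hypothetical fixtures reproducing the two rejected shapes named in the errors.
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    cat > nonarray.json <<'EOF'
    { "subsystems": { "bdev": {} } }
    EOF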
00:05:54.877 [2024-10-30 17:09:37.687991] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:54.877 [2024-10-30 17:09:37.687999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.134 00:05:55.134 real 0m0.488s 00:05:55.134 user 0m0.290s 00:05:55.134 sys 0m0.094s 00:05:55.134 17:09:37 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.134 17:09:37 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:05:55.134 ************************************ 00:05:55.134 END TEST bdev_json_nonarray 00:05:55.134 ************************************ 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:05:55.134 17:09:37 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:05:55.134 00:05:55.134 real 0m35.793s 00:05:55.134 user 0m55.062s 00:05:55.134 sys 0m5.089s 00:05:55.134 17:09:37 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.134 17:09:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:55.134 ************************************ 00:05:55.134 END TEST blockdev_nvme 00:05:55.134 ************************************ 00:05:55.134 17:09:37 -- spdk/autotest.sh@209 -- # uname -s 00:05:55.135 17:09:37 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:05:55.135 17:09:37 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:05:55.135 17:09:37 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:55.135 17:09:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.135 17:09:37 -- common/autotest_common.sh@10 -- # set +x 00:05:55.135 ************************************ 00:05:55.135 START TEST blockdev_nvme_gpt 00:05:55.135 ************************************ 00:05:55.135 17:09:37 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:05:55.135 * Looking for test storage... 
00:05:55.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.135 17:09:38 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:55.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.135 --rc genhtml_branch_coverage=1 00:05:55.135 --rc genhtml_function_coverage=1 00:05:55.135 --rc genhtml_legend=1 00:05:55.135 --rc geninfo_all_blocks=1 00:05:55.135 --rc geninfo_unexecuted_blocks=1 00:05:55.135 00:05:55.135 ' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:55.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.135 --rc 
genhtml_branch_coverage=1 00:05:55.135 --rc genhtml_function_coverage=1 00:05:55.135 --rc genhtml_legend=1 00:05:55.135 --rc geninfo_all_blocks=1 00:05:55.135 --rc geninfo_unexecuted_blocks=1 00:05:55.135 00:05:55.135 ' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:55.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.135 --rc genhtml_branch_coverage=1 00:05:55.135 --rc genhtml_function_coverage=1 00:05:55.135 --rc genhtml_legend=1 00:05:55.135 --rc geninfo_all_blocks=1 00:05:55.135 --rc geninfo_unexecuted_blocks=1 00:05:55.135 00:05:55.135 ' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:55.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.135 --rc genhtml_branch_coverage=1 00:05:55.135 --rc genhtml_function_coverage=1 00:05:55.135 --rc genhtml_legend=1 00:05:55.135 --rc geninfo_all_blocks=1 00:05:55.135 --rc geninfo_unexecuted_blocks=1 00:05:55.135 00:05:55.135 ' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:05:55.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
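Note: the lcov version check traced above (lt 1.15 2, handled by cmp_versions in scripts/common.sh) is a plain component-wise comparison of dot-separated numeric fields. A simplified standalone sketch of the same idea — not the exact SPDK helper, and using the hypothetical name version_lt — might look like this in bash:

    # Succeed (return 0) if $1 sorts strictly lower than $2, comparing numeric fields
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: use the legacy --rc option spelling"

The in-tree helpers are lt/cmp_versions as shown in the trace; this sketch skips their corner cases (padding, non-numeric fields).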
00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60611 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60611 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60611 ']' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.135 17:09:38 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.135 17:09:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:05:55.432 [2024-10-30 17:09:38.176616] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:05:55.432 [2024-10-30 17:09:38.176740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60611 ] 00:05:55.432 [2024-10-30 17:09:38.334965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.689 [2024-10-30 17:09:38.429776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.251 17:09:39 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.251 17:09:39 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:05:56.251 17:09:39 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:56.251 17:09:39 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:05:56.251 17:09:39 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:56.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.509 Waiting for block devices as requested 00:05:56.509 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:56.766 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:56.766 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:56.766 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:02.033 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- 
# for nvme in /sys/block/nvme* 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:02.033 BYT; 00:06:02.033 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:02.033 BYT; 00:06:02.033 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
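The zoned-device scan in the trace above reduces to reading each namespace's queue/zoned attribute under sysfs and treating anything other than "none" as zoned. A minimal standalone version of the same check (assuming the nvme block devices are visible under /sys/block) could be:

    # Report any nvme namespace whose block queue advertises a zoned model
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        zoned=$(<"$dev/queue/zoned")
        [[ $zoned != none ]] && echo "${dev##*/}: zoned ($zoned)"
    done

In this run every namespace reports "none", so no device is excluded from the GPT setup that follows.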
00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:02.033 17:09:44 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:02.033 17:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:02.966 The operation has completed successfully. 00:06:02.966 17:09:45 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:04.339 The operation has completed successfully. 
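For reference, the partitioning sequence that just completed can be reproduced with the same two tools; the commands below mirror the ones in the trace and assume /dev/nvme0n1 is a disposable test disk:

    # Fresh GPT label with two half-disk partitions named for the test
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%

    # Retag partition 1 with the current SPDK partition type GUID and a fixed unique GUID
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1

    # Retag partition 2 with the old SPDK partition type GUID
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

Both type GUIDs are the SPDK_GPT_PART_TYPE_GUID and SPDK_GPT_PART_TYPE_GUID_OLD values that the trace extracts from module/bdev/gpt/gpt.h, and the unique GUIDs are the fixed test values set just above.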
00:06:04.339 17:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.905 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.905 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.905 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.905 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.905 17:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:04.905 17:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.905 17:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:04.905 [] 00:06:04.905 17:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.905 17:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:04.905 17:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:04.905 17:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:04.905 17:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:05.163 17:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:05.163 17:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.163 17:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.422 17:09:48 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2f61870e-a8e8-45f7-a17d-e35dc1e410ae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2f61870e-a8e8-45f7-a17d-e35dc1e410ae",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6d4a9631-5d88-4390-87a8-b6ee96988bfc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d4a9631-5d88-4390-87a8-b6ee96988bfc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b3d17796-9b97-4ba0-9a48-bf6c6afdf274"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b3d17796-9b97-4ba0-9a48-bf6c6afdf274",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8f4c561e-7c8e-46be-b730-3311182ce6cc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8f4c561e-7c8e-46be-b730-3311182ce6cc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "737d638f-2c86-448c-9914-0ac877478c66"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "737d638f-2c86-448c-9914-0ac877478c66",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:05.422 17:09:48 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60611 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60611 ']' 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60611 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60611 00:06:05.422 killing process with pid 60611 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60611' 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60611 00:06:05.422 17:09:48 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60611 00:06:06.795 17:09:49 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:06.795 17:09:49 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:06.795 17:09:49 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:06.795 17:09:49 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.795 17:09:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:06.795 ************************************ 00:06:06.795 START TEST bdev_hello_world 00:06:06.795 ************************************ 00:06:06.795 17:09:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:06.795 [2024-10-30 17:09:49.606008] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:06:06.795 [2024-10-30 17:09:49.606094] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61226 ] 00:06:06.795 [2024-10-30 17:09:49.760064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.054 [2024-10-30 17:09:49.857769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.634 [2024-10-30 17:09:50.398498] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:07.634 [2024-10-30 17:09:50.398546] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:07.634 [2024-10-30 17:09:50.398564] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:07.634 [2024-10-30 17:09:50.401002] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:07.634 [2024-10-30 17:09:50.401789] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:07.634 [2024-10-30 17:09:50.401819] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:07.634 [2024-10-30 17:09:50.402741] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:07.634 00:06:07.634 [2024-10-30 17:09:50.402768] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:08.199 00:06:08.199 real 0m1.480s 00:06:08.199 user 0m1.223s 00:06:08.199 sys 0m0.152s 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.199 ************************************ 00:06:08.199 END TEST bdev_hello_world 00:06:08.199 ************************************ 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:08.199 17:09:51 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:08.199 17:09:51 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:08.199 17:09:51 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.199 17:09:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:08.199 ************************************ 00:06:08.199 START TEST bdev_bounds 00:06:08.199 ************************************ 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61263 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.199 Process bdevio pid: 61263 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61263' 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61263 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61263 ']' 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:08.199 17:09:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:08.199 [2024-10-30 17:09:51.154899] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:06:08.199 [2024-10-30 17:09:51.155018] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61263 ] 00:06:08.457 [2024-10-30 17:09:51.313308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.457 [2024-10-30 17:09:51.412717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.457 [2024-10-30 17:09:51.412962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.457 [2024-10-30 17:09:51.413037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.021 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.021 17:09:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:09.021 17:09:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:09.279 I/O targets: 00:06:09.279 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:09.279 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:09.279 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:09.279 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:09.279 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:09.279 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:09.279 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:09.279 00:06:09.279 00:06:09.279 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.279 http://cunit.sourceforge.net/ 00:06:09.279 00:06:09.279 00:06:09.279 Suite: bdevio tests on: Nvme3n1 00:06:09.279 Test: blockdev write read block ...passed 00:06:09.279 Test: blockdev write zeroes read block ...passed 00:06:09.279 Test: blockdev write zeroes read no split ...passed 00:06:09.279 Test: blockdev write zeroes read split ...passed 00:06:09.279 Test: blockdev write zeroes read split partial ...passed 00:06:09.279 Test: blockdev reset ...[2024-10-30 17:09:52.106155] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:09.279 [2024-10-30 17:09:52.108487] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:09.279 passed 00:06:09.279 Test: blockdev write read 8 blocks ...passed 00:06:09.279 Test: blockdev write read size > 128k ...passed 00:06:09.279 Test: blockdev write read invalid size ...passed 00:06:09.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.279 Test: blockdev write read max offset ...passed 00:06:09.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.279 Test: blockdev writev readv 8 blocks ...passed 00:06:09.279 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.279 Test: blockdev writev readv block ...passed 00:06:09.279 Test: blockdev writev readv size > 128k ...passed 00:06:09.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.279 Test: blockdev comparev and writev ...[2024-10-30 17:09:52.114765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1c04000 len:0x1000 00:06:09.279 [2024-10-30 17:09:52.114806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.279 passed 00:06:09.279 Test: blockdev nvme passthru rw ...passed 00:06:09.279 Test: blockdev nvme passthru vendor specific ...[2024-10-30 17:09:52.115607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.279 [2024-10-30 17:09:52.115646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.279 passed 00:06:09.279 Test: blockdev nvme admin passthru ...passed 00:06:09.279 Test: blockdev copy ...passed 00:06:09.279 Suite: bdevio tests on: Nvme2n3 00:06:09.279 Test: blockdev write read block ...passed 00:06:09.280 Test: blockdev write zeroes read block ...passed 00:06:09.280 Test: blockdev write zeroes read no split ...passed 00:06:09.280 Test: blockdev write zeroes read split ...passed 00:06:09.280 Test: blockdev write zeroes read split partial ...passed 00:06:09.280 Test: blockdev reset ...[2024-10-30 17:09:52.158747] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.280 [2024-10-30 17:09:52.161398] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:09.280 passed 00:06:09.280 Test: blockdev write read 8 blocks ...passed 00:06:09.280 Test: blockdev write read size > 128k ...passed 00:06:09.280 Test: blockdev write read invalid size ...passed 00:06:09.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.280 Test: blockdev write read max offset ...passed 00:06:09.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.280 Test: blockdev writev readv 8 blocks ...passed 00:06:09.280 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.280 Test: blockdev writev readv block ...passed 00:06:09.280 Test: blockdev writev readv size > 128k ...passed 00:06:09.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.280 Test: blockdev comparev and writev ...[2024-10-30 17:09:52.168037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1c02000 len:0x1000 00:06:09.280 passed 00:06:09.280 Test: blockdev nvme passthru rw ...[2024-10-30 17:09:52.168073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.280 passed 00:06:09.280 Test: blockdev nvme passthru vendor specific ...[2024-10-30 17:09:52.168626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.280 [2024-10-30 17:09:52.168648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.280 passed 00:06:09.280 Test: blockdev nvme admin passthru ...passed 00:06:09.280 Test: blockdev copy ...passed 00:06:09.280 Suite: bdevio tests on: Nvme2n2 00:06:09.280 Test: blockdev write read block ...passed 00:06:09.280 Test: blockdev write zeroes read block ...passed 00:06:09.280 Test: blockdev write zeroes read no split ...passed 00:06:09.280 Test: blockdev write zeroes read split ...passed 00:06:09.280 Test: blockdev write zeroes read split partial ...passed 00:06:09.280 Test: blockdev reset ...[2024-10-30 17:09:52.209634] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.280 [2024-10-30 17:09:52.212211] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:09.280 passed 00:06:09.280 Test: blockdev write read 8 blocks ...passed 00:06:09.280 Test: blockdev write read size > 128k ...passed 00:06:09.280 Test: blockdev write read invalid size ...passed 00:06:09.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.280 Test: blockdev write read max offset ...passed 00:06:09.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.280 Test: blockdev writev readv 8 blocks ...passed 00:06:09.280 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.280 Test: blockdev writev readv block ...passed 00:06:09.280 Test: blockdev writev readv size > 128k ...passed 00:06:09.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.280 Test: blockdev comparev and writev ...[2024-10-30 17:09:52.218645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca438000 len:0x1000 00:06:09.280 [2024-10-30 17:09:52.218679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.280 passed 00:06:09.280 Test: blockdev nvme passthru rw ...passed 00:06:09.280 Test: blockdev nvme passthru vendor specific ...[2024-10-30 17:09:52.219270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.280 [2024-10-30 17:09:52.219293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.280 passed 00:06:09.280 Test: blockdev nvme admin passthru ...passed 00:06:09.280 Test: blockdev copy ...passed 00:06:09.280 Suite: bdevio tests on: Nvme2n1 00:06:09.280 Test: blockdev write read block ...passed 00:06:09.280 Test: blockdev write zeroes read block ...passed 00:06:09.280 Test: blockdev write zeroes read no split ...passed 00:06:09.280 Test: blockdev write zeroes read split ...passed 00:06:09.538 Test: blockdev write zeroes read split partial ...passed 00:06:09.538 Test: blockdev reset ...[2024-10-30 17:09:52.261309] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.538 [2024-10-30 17:09:52.263871] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:09.538 passed 00:06:09.538 Test: blockdev write read 8 blocks ...passed 00:06:09.538 Test: blockdev write read size > 128k ...passed 00:06:09.538 Test: blockdev write read invalid size ...passed 00:06:09.538 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.538 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.538 Test: blockdev write read max offset ...passed 00:06:09.538 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.538 Test: blockdev writev readv 8 blocks ...passed 00:06:09.538 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.538 Test: blockdev writev readv block ...passed 00:06:09.538 Test: blockdev writev readv size > 128k ...passed 00:06:09.538 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.538 Test: blockdev comparev and writev ...[2024-10-30 17:09:52.270188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca434000 len:0x1000 00:06:09.538 [2024-10-30 17:09:52.270233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.538 passed 00:06:09.538 Test: blockdev nvme passthru rw ...passed 00:06:09.538 Test: blockdev nvme passthru vendor specific ...passed 00:06:09.538 Test: blockdev nvme admin passthru ...[2024-10-30 17:09:52.270716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.538 [2024-10-30 17:09:52.270734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.538 passed 00:06:09.538 Test: blockdev copy ...passed 00:06:09.538 Suite: bdevio tests on: Nvme1n1p2 00:06:09.538 Test: blockdev write read block ...passed 00:06:09.538 Test: blockdev write zeroes read block ...passed 00:06:09.538 Test: blockdev write zeroes read no split ...passed 00:06:09.538 Test: blockdev write zeroes read split ...passed 00:06:09.538 Test: blockdev write zeroes read split partial ...passed 00:06:09.538 Test: blockdev reset ...[2024-10-30 17:09:52.313792] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:09.538 [2024-10-30 17:09:52.316177] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:09.538 passed 00:06:09.538 Test: blockdev write read 8 blocks ...passed 00:06:09.538 Test: blockdev write read size > 128k ...passed 00:06:09.538 Test: blockdev write read invalid size ...passed 00:06:09.538 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.538 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.538 Test: blockdev write read max offset ...passed 00:06:09.538 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.538 Test: blockdev writev readv 8 blocks ...passed 00:06:09.538 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.538 Test: blockdev writev readv block ...passed 00:06:09.538 Test: blockdev writev readv size > 128k ...passed 00:06:09.538 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.538 Test: blockdev comparev and writev ...[2024-10-30 17:09:52.322862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ca430000 len:0x1000 00:06:09.538 [2024-10-30 17:09:52.322895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.538 passed 00:06:09.538 Test: blockdev nvme passthru rw ...passed 00:06:09.538 Test: blockdev nvme passthru vendor specific ...passed 00:06:09.538 Test: blockdev nvme admin passthru ...passed 00:06:09.538 Test: blockdev copy ...passed 00:06:09.538 Suite: bdevio tests on: Nvme1n1p1 00:06:09.539 Test: blockdev write read block ...passed 00:06:09.539 Test: blockdev write zeroes read block ...passed 00:06:09.539 Test: blockdev write zeroes read no split ...passed 00:06:09.539 Test: blockdev write zeroes read split ...passed 00:06:09.539 Test: blockdev write zeroes read split partial ...passed 00:06:09.539 Test: blockdev reset ...[2024-10-30 17:09:52.364049] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:09.539 [2024-10-30 17:09:52.366313] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:09.539 passed 00:06:09.539 Test: blockdev write read 8 blocks ...passed 00:06:09.539 Test: blockdev write read size > 128k ...passed 00:06:09.539 Test: blockdev write read invalid size ...passed 00:06:09.539 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.539 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.539 Test: blockdev write read max offset ...passed 00:06:09.539 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.539 Test: blockdev writev readv 8 blocks ...passed 00:06:09.539 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.539 Test: blockdev writev readv block ...passed 00:06:09.539 Test: blockdev writev readv size > 128k ...passed 00:06:09.539 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.539 Test: blockdev comparev and writev ...[2024-10-30 17:09:52.372168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b260e000 len:0x1000 00:06:09.539 [2024-10-30 17:09:52.372207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.539 passed 00:06:09.539 Test: blockdev nvme passthru rw ...passed 00:06:09.539 Test: blockdev nvme passthru vendor specific ...passed 00:06:09.539 Test: blockdev nvme admin passthru ...passed 00:06:09.539 Test: blockdev copy ...passed 00:06:09.539 Suite: bdevio tests on: Nvme0n1 00:06:09.539 Test: blockdev write read block ...passed 00:06:09.539 Test: blockdev write zeroes read block ...passed 00:06:09.539 Test: blockdev write zeroes read no split ...passed 00:06:09.539 Test: blockdev write zeroes read split ...passed 00:06:09.539 Test: blockdev write zeroes read split partial ...passed 00:06:09.539 Test: blockdev reset ...[2024-10-30 17:09:52.414799] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:09.539 [2024-10-30 17:09:52.417050] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:09.539 passed 00:06:09.539 Test: blockdev write read 8 blocks ...passed 00:06:09.539 Test: blockdev write read size > 128k ...passed 00:06:09.539 Test: blockdev write read invalid size ...passed 00:06:09.539 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.539 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.539 Test: blockdev write read max offset ...passed 00:06:09.539 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.539 Test: blockdev writev readv 8 blocks ...passed 00:06:09.539 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.539 Test: blockdev writev readv block ...passed 00:06:09.539 Test: blockdev writev readv size > 128k ...passed 00:06:09.539 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.539 Test: blockdev comparev and writev ...[2024-10-30 17:09:52.422548] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:09.539 separate metadata which is not supported yet. 
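For context on the message above: the comparev_and_writev step is skipped for Nvme0n1 because that bdev exposes separate (non-interleaved) metadata, which this bdevio check does not exercise yet, so the test is reported as passed without running the compare. If you need to confirm a bdev's metadata layout before interpreting such a skip, the bdev_get_bdevs RPC reports it. A minimal sketch only; the RPC socket depends on how the app under test was launched, and field names such as md_size and md_interleave are from memory of the RPC output and should be verified against your SPDK build:

$ scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {name, block_size, md_size, md_interleave}'   # md_* fields: assumed names, verify locally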
00:06:09.539 passed 00:06:09.539 Test: blockdev nvme passthru rw ...passed 00:06:09.539 Test: blockdev nvme passthru vendor specific ...passed 00:06:09.539 Test: blockdev nvme admin passthru ...[2024-10-30 17:09:52.422933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:09.539 [2024-10-30 17:09:52.422962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:09.539 passed 00:06:09.539 Test: blockdev copy ...passed 00:06:09.539 00:06:09.539 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.539 suites 7 7 n/a 0 0 00:06:09.539 tests 161 161 161 0 0 00:06:09.539 asserts 1025 1025 1025 0 n/a 00:06:09.539 00:06:09.539 Elapsed time = 1.018 seconds 00:06:09.539 0 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61263 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61263 ']' 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61263 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61263 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:09.539 killing process with pid 61263 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61263' 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61263 00:06:09.539 17:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61263 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:10.473 00:06:10.473 real 0m2.027s 00:06:10.473 user 0m5.150s 00:06:10.473 sys 0m0.264s 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:10.473 ************************************ 00:06:10.473 END TEST bdev_bounds 00:06:10.473 ************************************ 00:06:10.473 17:09:53 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:10.473 17:09:53 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:10.473 17:09:53 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.473 17:09:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:10.473 ************************************ 00:06:10.473 START TEST bdev_nbd 00:06:10.473 ************************************ 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:10.473 17:09:53 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:10.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61319 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61319 /var/tmp/spdk-nbd.sock 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61319 ']' 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:10.473 17:09:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:10.473 [2024-10-30 17:09:53.221892] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
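For the nbd tests that follow, the harness launches a minimal SPDK application (bdev_svc) with the bdev.json config and an RPC socket at /var/tmp/spdk-nbd.sock, then exports each bdev as a kernel /dev/nbdX device over that socket. A rough manual equivalent of what the trace performs, assuming the same paths as this run and a working directory at the SPDK repo root:

$ test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock --json test/bdev/bdev.json &   # minimal app exposing the bdevs
$ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0           # export one bdev as an nbd device
$ grep -q -w nbd0 /proc/partitions && echo "nbd0 is up"                               # kernel has registered the device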
00:06:10.473 [2024-10-30 17:09:53.221981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.473 [2024-10-30 17:09:53.375954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.731 [2024-10-30 17:09:53.472434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:11.296 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.554 1+0 records in 00:06:11.554 1+0 records out 00:06:11.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000934416 s, 4.4 MB/s 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:11.554 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:11.811 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.812 1+0 records in 00:06:11.812 1+0 records out 00:06:11.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010322 s, 4.0 MB/s 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:11.812 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.069 1+0 records in 00:06:12.069 1+0 records out 00:06:12.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582403 s, 7.0 MB/s 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:12.069 17:09:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.069 1+0 records in 00:06:12.069 1+0 records out 00:06:12.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037837 s, 10.8 MB/s 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:12.069 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.327 1+0 records in 00:06:12.327 1+0 records out 00:06:12.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416513 s, 9.8 MB/s 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:12.327 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.586 1+0 records in 00:06:12.586 1+0 records out 00:06:12.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032545 s, 12.6 MB/s 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:12.586 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:12.847 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:12.847 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:12.847 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.848 1+0 records in 00:06:12.848 1+0 records out 00:06:12.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498378 s, 8.2 MB/s 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:12.848 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd0", 00:06:13.108 "bdev_name": "Nvme0n1" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd1", 00:06:13.108 "bdev_name": "Nvme1n1p1" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd2", 00:06:13.108 "bdev_name": "Nvme1n1p2" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd3", 00:06:13.108 "bdev_name": "Nvme2n1" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd4", 00:06:13.108 "bdev_name": "Nvme2n2" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd5", 00:06:13.108 "bdev_name": "Nvme2n3" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd6", 00:06:13.108 "bdev_name": "Nvme3n1" 00:06:13.108 } 00:06:13.108 ]' 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd0", 00:06:13.108 "bdev_name": "Nvme0n1" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd1", 00:06:13.108 "bdev_name": "Nvme1n1p1" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd2", 00:06:13.108 "bdev_name": "Nvme1n1p2" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd3", 00:06:13.108 "bdev_name": "Nvme2n1" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd4", 00:06:13.108 "bdev_name": "Nvme2n2" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd5", 00:06:13.108 "bdev_name": "Nvme2n3" 00:06:13.108 }, 00:06:13.108 { 00:06:13.108 "nbd_device": "/dev/nbd6", 00:06:13.108 "bdev_name": "Nvme3n1" 00:06:13.108 } 00:06:13.108 ]' 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.108 17:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.367 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.625 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.882 17:09:56 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.882 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.883 17:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.140 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.398 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.655 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.913 
17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:14.913 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:15.171 /dev/nbd0 00:06:15.171 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.171 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.171 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:15.171 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:15.171 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.171 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.172 1+0 records in 00:06:15.172 1+0 records out 00:06:15.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378134 s, 10.8 MB/s 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:15.172 17:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:15.172 /dev/nbd1 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.172 17:09:58 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.172 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.429 1+0 records in 00:06:15.429 1+0 records out 00:06:15.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279561 s, 14.7 MB/s 00:06:15.429 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.429 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:15.429 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.429 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.429 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:15.429 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.429 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:15.430 /dev/nbd10 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.430 1+0 records in 00:06:15.430 1+0 records out 00:06:15.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481493 s, 8.5 MB/s 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:15.430 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:15.687 /dev/nbd11 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.687 1+0 records in 00:06:15.687 1+0 records out 00:06:15.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486263 s, 8.4 MB/s 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:15.687 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.688 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.688 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:15.688 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.688 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:15.688 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:15.946 /dev/nbd12 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
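Each attach in this phase is verified the same way: start the nbd disk over the RPC socket, wait for the device node to show up in /proc/partitions, read one 4 KiB block with O_DIRECT, and check that exactly 4096 bytes were copied. Condensed from the trace for /dev/nbd12 (backed by Nvme2n2 in this run); the polling loop and the temp file path are simplifications of what the harness actually does:

$ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
$ until grep -q -w nbd12 /proc/partitions; do sleep 0.1; done    # harness retries up to 20 times instead
$ dd if=/dev/nbd12 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
$ stat -c %s /tmp/nbdtest                                        # expect 4096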
00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.946 1+0 records in 00:06:15.946 1+0 records out 00:06:15.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511049 s, 8.0 MB/s 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:15.946 17:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:16.204 /dev/nbd13 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.204 1+0 records in 00:06:16.204 1+0 records out 00:06:16.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377378 s, 10.9 MB/s 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:16.204 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:16.461 /dev/nbd14 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.461 1+0 records in 00:06:16.461 1+0 records out 00:06:16.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048267 s, 8.5 MB/s 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.461 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd0", 00:06:16.718 "bdev_name": "Nvme0n1" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd1", 00:06:16.718 "bdev_name": "Nvme1n1p1" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd10", 00:06:16.718 "bdev_name": "Nvme1n1p2" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd11", 00:06:16.718 "bdev_name": "Nvme2n1" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd12", 00:06:16.718 "bdev_name": "Nvme2n2" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd13", 00:06:16.718 "bdev_name": "Nvme2n3" 
00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd14", 00:06:16.718 "bdev_name": "Nvme3n1" 00:06:16.718 } 00:06:16.718 ]' 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd0", 00:06:16.718 "bdev_name": "Nvme0n1" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd1", 00:06:16.718 "bdev_name": "Nvme1n1p1" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd10", 00:06:16.718 "bdev_name": "Nvme1n1p2" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd11", 00:06:16.718 "bdev_name": "Nvme2n1" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd12", 00:06:16.718 "bdev_name": "Nvme2n2" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd13", 00:06:16.718 "bdev_name": "Nvme2n3" 00:06:16.718 }, 00:06:16.718 { 00:06:16.718 "nbd_device": "/dev/nbd14", 00:06:16.718 "bdev_name": "Nvme3n1" 00:06:16.718 } 00:06:16.718 ]' 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.718 /dev/nbd1 00:06:16.718 /dev/nbd10 00:06:16.718 /dev/nbd11 00:06:16.718 /dev/nbd12 00:06:16.718 /dev/nbd13 00:06:16.718 /dev/nbd14' 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.718 /dev/nbd1 00:06:16.718 /dev/nbd10 00:06:16.718 /dev/nbd11 00:06:16.718 /dev/nbd12 00:06:16.718 /dev/nbd13 00:06:16.718 /dev/nbd14' 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:16.718 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:16.719 256+0 records in 00:06:16.719 256+0 records out 00:06:16.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121824 s, 86.1 MB/s 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.719 256+0 records in 00:06:16.719 256+0 records out 00:06:16.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0740553 s, 14.2 MB/s 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.719 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.975 256+0 records in 00:06:16.975 256+0 records out 00:06:16.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0742795 s, 14.1 MB/s 00:06:16.976 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.976 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:16.976 256+0 records in 00:06:16.976 256+0 records out 00:06:16.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0780089 s, 13.4 MB/s 00:06:16.976 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.976 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:16.976 256+0 records in 00:06:16.976 256+0 records out 00:06:16.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0761463 s, 13.8 MB/s 00:06:16.976 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.976 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:17.233 256+0 records in 00:06:17.233 256+0 records out 00:06:17.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0739832 s, 14.2 MB/s 00:06:17.233 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:17.233 256+0 records in 00:06:17.233 256+0 records out 00:06:17.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0727276 s, 14.4 MB/s 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:17.233 256+0 records in 00:06:17.233 256+0 records out 00:06:17.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0735957 s, 14.2 MB/s 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.233 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.491 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.749 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:18.007 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.264 17:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.264 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.522 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.523 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:18.781 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:19.039 malloc_lvol_verify 00:06:19.039 17:10:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:19.298 3e8d869d-b0e0-497e-9903-00c08f4682bc 00:06:19.298 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:19.555 d71b2062-8949-4f8b-b8d8-e4f349b5390d 00:06:19.555 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:19.812 /dev/nbd0 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:19.812 mke2fs 1.47.0 (5-Feb-2023) 00:06:19.812 Discarding device blocks: 0/4096 done 00:06:19.812 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:19.812 00:06:19.812 Allocating group tables: 0/1 done 00:06:19.812 Writing inode tables: 0/1 done 00:06:19.812 Creating journal (1024 blocks): done 00:06:19.812 Writing superblocks and filesystem accounting information: 0/1 done 00:06:19.812 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.812 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61319 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61319 ']' 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61319 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.813 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61319 00:06:20.070 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:20.070 killing process with pid 61319 00:06:20.070 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:20.071 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61319' 00:06:20.071 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61319 00:06:20.071 17:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61319 00:06:20.635 17:10:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:20.635 00:06:20.635 real 0m10.220s 00:06:20.635 user 0m14.642s 00:06:20.635 sys 0m3.428s 00:06:20.635 17:10:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.635 ************************************ 00:06:20.635 END TEST bdev_nbd 00:06:20.635 ************************************ 00:06:20.635 17:10:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:20.635 17:10:03 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:20.635 17:10:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:20.635 17:10:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:20.635 skipping fio tests on NVMe due to multi-ns failures. 00:06:20.635 17:10:03 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:20.635 17:10:03 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:20.635 17:10:03 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:20.635 17:10:03 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:20.635 17:10:03 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.635 17:10:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.635 ************************************ 00:06:20.635 START TEST bdev_verify 00:06:20.635 ************************************ 00:06:20.635 17:10:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:20.635 [2024-10-30 17:10:03.490005] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:06:20.635 [2024-10-30 17:10:03.490123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61729 ] 00:06:20.892 [2024-10-30 17:10:03.652220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.892 [2024-10-30 17:10:03.750966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.892 [2024-10-30 17:10:03.751072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.459 Running I/O for 5 seconds... 
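[Editor's sketch, not part of the captured log] The bdev_verify stage launched above is a single bdevperf run against the bdev.json produced earlier in the job. Stripped of the run_test wrapper and of the empty placeholder argument the wrapper appends, the invocation recorded in the log is, in effect, the command below (-C is carried over from the harness unchanged):

  # -q 128: queue depth, -o 4096: 4 KiB I/Os, -w verify: write-then-read-back with data checking,
  # -t 5: run for five seconds, -m 0x3: cores 0 and 1, matching the two reactors started above.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3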
00:06:23.771 22784.00 IOPS, 89.00 MiB/s [2024-10-30T17:10:07.687Z] 24512.00 IOPS, 95.75 MiB/s [2024-10-30T17:10:08.621Z] 24810.67 IOPS, 96.92 MiB/s [2024-10-30T17:10:09.555Z] 25072.00 IOPS, 97.94 MiB/s [2024-10-30T17:10:09.556Z] 25395.20 IOPS, 99.20 MiB/s 00:06:26.575 Latency(us) 00:06:26.575 [2024-10-30T17:10:09.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:26.575 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x0 length 0xbd0bd 00:06:26.575 Nvme0n1 : 5.06 1771.40 6.92 0.00 0.00 72066.82 13913.80 79046.50 00:06:26.575 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:26.575 Nvme0n1 : 5.08 1814.75 7.09 0.00 0.00 69490.81 7864.32 67754.14 00:06:26.575 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x0 length 0x4ff80 00:06:26.575 Nvme1n1p1 : 5.06 1770.87 6.92 0.00 0.00 71972.12 15728.64 71383.83 00:06:26.575 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:26.575 Nvme1n1p1 : 5.04 1802.83 7.04 0.00 0.00 70788.70 13208.02 81062.99 00:06:26.575 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x0 length 0x4ff7f 00:06:26.575 Nvme1n1p2 : 5.06 1770.33 6.92 0.00 0.00 71879.04 15022.87 68560.74 00:06:26.575 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:26.575 Nvme1n1p2 : 5.04 1802.26 7.04 0.00 0.00 70631.34 14720.39 71383.83 00:06:26.575 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x0 length 0x80000 00:06:26.575 Nvme2n1 : 5.06 1769.82 6.91 0.00 0.00 71756.22 14317.10 61301.37 00:06:26.575 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x80000 length 0x80000 00:06:26.575 Nvme2n1 : 5.04 1801.76 7.04 0.00 0.00 70518.50 15325.34 65737.65 00:06:26.575 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x0 length 0x80000 00:06:26.575 Nvme2n2 : 5.06 1769.33 6.91 0.00 0.00 71606.76 13812.97 59688.17 00:06:26.575 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x80000 length 0x80000 00:06:26.575 Nvme2n2 : 5.07 1817.33 7.10 0.00 0.00 69835.09 7662.67 61704.66 00:06:26.575 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x0 length 0x80000 00:06:26.575 Nvme2n3 : 5.07 1778.63 6.95 0.00 0.00 71118.76 2470.20 61704.66 00:06:26.575 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x80000 length 0x80000 00:06:26.575 Nvme2n3 : 5.07 1816.83 7.10 0.00 0.00 69687.20 8065.97 63721.16 00:06:26.575 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x0 length 0x20000 00:06:26.575 Nvme3n1 : 5.08 1787.45 6.98 0.00 0.00 70679.45 6906.49 64931.05 00:06:26.575 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:26.575 Verification LBA range: start 0x20000 length 0x20000 00:06:26.575 Nvme3n1 : 
5.07 1816.35 7.10 0.00 0.00 69570.90 8469.27 64931.05 00:06:26.575 [2024-10-30T17:10:09.556Z] =================================================================================================================== 00:06:26.575 [2024-10-30T17:10:09.556Z] Total : 25089.94 98.01 0.00 0.00 70818.61 2470.20 81062.99 00:06:27.947 00:06:27.947 real 0m7.303s 00:06:27.947 user 0m13.711s 00:06:27.947 sys 0m0.214s 00:06:27.947 17:10:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.947 17:10:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:27.947 ************************************ 00:06:27.947 END TEST bdev_verify 00:06:27.947 ************************************ 00:06:27.947 17:10:10 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:27.947 17:10:10 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:27.947 17:10:10 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.947 17:10:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.947 ************************************ 00:06:27.947 START TEST bdev_verify_big_io 00:06:27.947 ************************************ 00:06:27.947 17:10:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:27.947 [2024-10-30 17:10:10.831702] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:06:27.947 [2024-10-30 17:10:10.831823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61827 ] 00:06:28.205 [2024-10-30 17:10:10.992583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.205 [2024-10-30 17:10:11.091336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.205 [2024-10-30 17:10:11.091451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.139 Running I/O for 5 seconds... 
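[Editor's sketch, not part of the captured log] bdev_verify_big_io, starting here, repeats the run above with the I/O size raised from 4 KiB to 64 KiB; every other flag on the command line is unchanged:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3

As in the 4 KiB table above, each bdev appears twice in the results, once for each of the two cores in the 0x3 mask.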
00:06:33.796 1853.00 IOPS, 115.81 MiB/s [2024-10-30T17:10:18.153Z] 2693.00 IOPS, 168.31 MiB/s [2024-10-30T17:10:18.153Z] 3657.00 IOPS, 228.56 MiB/s 00:06:35.172 Latency(us) 00:06:35.172 [2024-10-30T17:10:18.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.172 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x0 length 0xbd0b 00:06:35.172 Nvme0n1 : 5.66 124.66 7.79 0.00 0.00 972904.62 14619.57 1206669.00 00:06:35.172 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:35.172 Nvme0n1 : 5.84 120.51 7.53 0.00 0.00 1004068.48 9628.75 1367988.38 00:06:35.172 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x0 length 0x4ff8 00:06:35.172 Nvme1n1p1 : 5.75 133.46 8.34 0.00 0.00 884493.92 80256.39 1032444.06 00:06:35.172 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x4ff8 length 0x4ff8 00:06:35.172 Nvme1n1p1 : 5.84 120.47 7.53 0.00 0.00 969476.80 101631.21 1167952.34 00:06:35.172 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x0 length 0x4ff7 00:06:35.172 Nvme1n1p2 : 5.76 138.98 8.69 0.00 0.00 838157.80 95985.03 871124.68 00:06:35.172 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x4ff7 length 0x4ff7 00:06:35.172 Nvme1n1p2 : 5.93 121.66 7.60 0.00 0.00 931573.58 84289.38 1135688.47 00:06:35.172 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x0 length 0x8000 00:06:35.172 Nvme2n1 : 5.83 141.80 8.86 0.00 0.00 797668.03 73400.32 890483.00 00:06:35.172 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x8000 length 0x8000 00:06:35.172 Nvme2n1 : 6.00 125.41 7.84 0.00 0.00 886464.50 67754.14 1426063.36 00:06:35.172 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x0 length 0x8000 00:06:35.172 Nvme2n2 : 5.94 150.83 9.43 0.00 0.00 731884.31 62107.96 819502.47 00:06:35.172 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x8000 length 0x8000 00:06:35.172 Nvme2n2 : 6.07 129.74 8.11 0.00 0.00 825507.42 61301.37 1677721.60 00:06:35.172 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x0 length 0x8000 00:06:35.172 Nvme2n3 : 6.00 159.97 10.00 0.00 0.00 670794.10 22988.01 838860.80 00:06:35.172 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x8000 length 0x8000 00:06:35.172 Nvme2n3 : 6.10 143.61 8.98 0.00 0.00 722666.07 14317.10 1690627.15 00:06:35.172 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x0 length 0x2000 00:06:35.172 Nvme3n1 : 6.07 179.62 11.23 0.00 0.00 580781.34 749.88 903388.55 00:06:35.172 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.172 Verification LBA range: start 0x2000 length 0x2000 00:06:35.172 Nvme3n1 : 6.22 206.64 12.91 0.00 0.00 491802.99 422.20 1529307.77 00:06:35.172 
[2024-10-30T17:10:18.153Z] =================================================================================================================== 00:06:35.172 [2024-10-30T17:10:18.153Z] Total : 1997.36 124.84 0.00 0.00 780655.45 422.20 1690627.15 00:06:36.564 00:06:36.564 real 0m8.712s 00:06:36.564 user 0m16.414s 00:06:36.564 sys 0m0.210s 00:06:36.564 17:10:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.564 ************************************ 00:06:36.564 END TEST bdev_verify_big_io 00:06:36.564 ************************************ 00:06:36.564 17:10:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:36.564 17:10:19 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.564 17:10:19 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:36.564 17:10:19 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.564 17:10:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.564 ************************************ 00:06:36.564 START TEST bdev_write_zeroes 00:06:36.564 ************************************ 00:06:36.564 17:10:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.819 [2024-10-30 17:10:19.586722] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:06:36.819 [2024-10-30 17:10:19.586839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61936 ] 00:06:36.819 [2024-10-30 17:10:19.743190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.076 [2024-10-30 17:10:19.822072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.642 Running I/O for 1 seconds... 
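[Editor's sketch, not part of the captured log] The bdev_write_zeroes stage launched just above uses the same binary and config but swaps the workload and shortens the run; the log also shows it starting with a single core (-c 0x1, one reactor). In effect:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1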
00:06:38.577 70336.00 IOPS, 274.75 MiB/s 00:06:38.577 Latency(us) 00:06:38.577 [2024-10-30T17:10:21.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.577 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.577 Nvme0n1 : 1.02 10022.61 39.15 0.00 0.00 12736.43 9931.22 23391.31 00:06:38.577 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.577 Nvme1n1p1 : 1.02 10010.21 39.10 0.00 0.00 12734.81 9628.75 23391.31 00:06:38.577 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.577 Nvme1n1p2 : 1.02 9997.88 39.05 0.00 0.00 12704.53 9679.16 22786.36 00:06:38.577 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.577 Nvme2n1 : 1.03 10019.32 39.14 0.00 0.00 12628.38 8418.86 22383.06 00:06:38.577 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.577 Nvme2n2 : 1.03 10008.06 39.09 0.00 0.00 12621.77 8267.62 22383.06 00:06:38.577 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.577 Nvme2n3 : 1.03 9980.30 38.99 0.00 0.00 12638.02 8065.97 22181.42 00:06:38.577 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:38.577 Nvme3n1 : 1.03 9969.07 38.94 0.00 0.00 12631.60 7914.73 23592.96 00:06:38.577 [2024-10-30T17:10:21.558Z] =================================================================================================================== 00:06:38.577 [2024-10-30T17:10:21.558Z] Total : 70007.45 273.47 0.00 0.00 12670.71 7914.73 23592.96 00:06:39.143 00:06:39.143 real 0m2.600s 00:06:39.143 user 0m2.316s 00:06:39.144 sys 0m0.172s 00:06:39.144 ************************************ 00:06:39.144 17:10:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.144 17:10:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:39.401 END TEST bdev_write_zeroes 00:06:39.401 ************************************ 00:06:39.401 17:10:22 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.401 17:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:39.401 17:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.401 17:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.401 ************************************ 00:06:39.401 START TEST bdev_json_nonenclosed 00:06:39.401 ************************************ 00:06:39.401 17:10:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.401 [2024-10-30 17:10:22.238689] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
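[Editor's note, not part of the captured log] The Total rows of the three bdevperf tables above are internally consistent: MiB/s is simply IOPS times I/O size. A quick check with the numbers copied from the tables:

  python3 -c 'print(25089.94 * 4096 / 2**20)'    # 4 KiB verify run      -> ~98.01 MiB/s
  python3 -c 'print(1997.36 * 65536 / 2**20)'    # 64 KiB big-I/O run    -> ~124.84 MiB/s
  python3 -c 'print(70007.45 * 4096 / 2**20)'    # 4 KiB write_zeroes run -> ~273.47 MiB/s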
00:06:39.401 [2024-10-30 17:10:22.238809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61989 ] 00:06:39.658 [2024-10-30 17:10:22.399878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.658 [2024-10-30 17:10:22.494719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.658 [2024-10-30 17:10:22.494800] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:39.658 [2024-10-30 17:10:22.494816] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:39.658 [2024-10-30 17:10:22.494826] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.915 00:06:39.915 real 0m0.495s 00:06:39.915 user 0m0.285s 00:06:39.915 sys 0m0.106s 00:06:39.915 17:10:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.915 17:10:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:39.915 ************************************ 00:06:39.915 END TEST bdev_json_nonenclosed 00:06:39.915 ************************************ 00:06:39.915 17:10:22 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.915 17:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:39.915 17:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.915 17:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.915 ************************************ 00:06:39.915 START TEST bdev_json_nonarray 00:06:39.915 ************************************ 00:06:39.915 17:10:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.915 [2024-10-30 17:10:22.773921] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:06:39.915 [2024-10-30 17:10:22.774039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62009 ] 00:06:40.175 [2024-10-30 17:10:22.927544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.175 [2024-10-30 17:10:23.024194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.175 [2024-10-30 17:10:23.024282] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:40.175 [2024-10-30 17:10:23.024300] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:40.175 [2024-10-30 17:10:23.024308] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.434 00:06:40.434 real 0m0.488s 00:06:40.434 user 0m0.300s 00:06:40.434 sys 0m0.084s 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:40.434 ************************************ 00:06:40.434 END TEST bdev_json_nonarray 00:06:40.434 ************************************ 00:06:40.434 17:10:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:06:40.434 17:10:23 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:06:40.434 17:10:23 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:40.434 17:10:23 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.434 17:10:23 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.434 17:10:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:40.434 ************************************ 00:06:40.434 START TEST bdev_gpt_uuid 00:06:40.434 ************************************ 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:06:40.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62040 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62040 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62040 ']' 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.434 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.435 17:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:40.435 [2024-10-30 17:10:23.300481] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
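[Editor's note, not part of the captured log] bdev_json_nonenclosed and bdev_json_nonarray, which finished just above, are negative tests: bdevperf is fed deliberately malformed configs, and the parser errors in the log ("not enclosed in {}" and "'subsystems' should be an array") are the expected, passing outcomes. For contrast, a minimal well-formed skeleton has a single top-level object whose "subsystems" key is an array; this is only a sketch of the shape, not the bdev.json these jobs actually use (which also carries the NVMe attach configuration):

  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }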
00:06:40.435 [2024-10-30 17:10:23.300573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62040 ] 00:06:40.693 [2024-10-30 17:10:23.455312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.693 [2024-10-30 17:10:23.548825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.258 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.258 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:06:41.258 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:41.258 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.258 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:41.517 Some configs were skipped because the RPC state that can call them passed over. 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:06:41.517 { 00:06:41.517 "name": "Nvme1n1p1", 00:06:41.517 "aliases": [ 00:06:41.517 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:41.517 ], 00:06:41.517 "product_name": "GPT Disk", 00:06:41.517 "block_size": 4096, 00:06:41.517 "num_blocks": 655104, 00:06:41.517 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:41.517 "assigned_rate_limits": { 00:06:41.517 "rw_ios_per_sec": 0, 00:06:41.517 "rw_mbytes_per_sec": 0, 00:06:41.517 "r_mbytes_per_sec": 0, 00:06:41.517 "w_mbytes_per_sec": 0 00:06:41.517 }, 00:06:41.517 "claimed": false, 00:06:41.517 "zoned": false, 00:06:41.517 "supported_io_types": { 00:06:41.517 "read": true, 00:06:41.517 "write": true, 00:06:41.517 "unmap": true, 00:06:41.517 "flush": true, 00:06:41.517 "reset": true, 00:06:41.517 "nvme_admin": false, 00:06:41.517 "nvme_io": false, 00:06:41.517 "nvme_io_md": false, 00:06:41.517 "write_zeroes": true, 00:06:41.517 "zcopy": false, 00:06:41.517 "get_zone_info": false, 00:06:41.517 "zone_management": false, 00:06:41.517 "zone_append": false, 00:06:41.517 "compare": true, 00:06:41.517 "compare_and_write": false, 00:06:41.517 "abort": true, 00:06:41.517 "seek_hole": false, 00:06:41.517 "seek_data": false, 00:06:41.517 "copy": true, 00:06:41.517 "nvme_iov_md": false 00:06:41.517 }, 00:06:41.517 "driver_specific": { 
00:06:41.517 "gpt": { 00:06:41.517 "base_bdev": "Nvme1n1", 00:06:41.517 "offset_blocks": 256, 00:06:41.517 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:41.517 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:41.517 "partition_name": "SPDK_TEST_first" 00:06:41.517 } 00:06:41.517 } 00:06:41.517 } 00:06:41.517 ]' 00:06:41.517 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:06:41.776 { 00:06:41.776 "name": "Nvme1n1p2", 00:06:41.776 "aliases": [ 00:06:41.776 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:06:41.776 ], 00:06:41.776 "product_name": "GPT Disk", 00:06:41.776 "block_size": 4096, 00:06:41.776 "num_blocks": 655103, 00:06:41.776 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:41.776 "assigned_rate_limits": { 00:06:41.776 "rw_ios_per_sec": 0, 00:06:41.776 "rw_mbytes_per_sec": 0, 00:06:41.776 "r_mbytes_per_sec": 0, 00:06:41.776 "w_mbytes_per_sec": 0 00:06:41.776 }, 00:06:41.776 "claimed": false, 00:06:41.776 "zoned": false, 00:06:41.776 "supported_io_types": { 00:06:41.776 "read": true, 00:06:41.776 "write": true, 00:06:41.776 "unmap": true, 00:06:41.776 "flush": true, 00:06:41.776 "reset": true, 00:06:41.776 "nvme_admin": false, 00:06:41.776 "nvme_io": false, 00:06:41.776 "nvme_io_md": false, 00:06:41.776 "write_zeroes": true, 00:06:41.776 "zcopy": false, 00:06:41.776 "get_zone_info": false, 00:06:41.776 "zone_management": false, 00:06:41.776 "zone_append": false, 00:06:41.776 "compare": true, 00:06:41.776 "compare_and_write": false, 00:06:41.776 "abort": true, 00:06:41.776 "seek_hole": false, 00:06:41.776 "seek_data": false, 00:06:41.776 "copy": true, 00:06:41.776 "nvme_iov_md": false 00:06:41.776 }, 00:06:41.776 "driver_specific": { 00:06:41.776 "gpt": { 00:06:41.776 "base_bdev": "Nvme1n1", 00:06:41.776 "offset_blocks": 655360, 00:06:41.776 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:06:41.776 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:41.776 "partition_name": "SPDK_TEST_second" 00:06:41.776 } 00:06:41.776 } 00:06:41.776 } 00:06:41.776 ]' 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62040 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62040 ']' 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62040 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62040 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:41.776 killing process with pid 62040 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62040' 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62040 00:06:41.776 17:10:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62040 00:06:43.680 00:06:43.680 real 0m2.935s 00:06:43.680 user 0m3.095s 00:06:43.680 sys 0m0.345s 00:06:43.680 17:10:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.680 ************************************ 00:06:43.680 END TEST bdev_gpt_uuid 00:06:43.680 ************************************ 00:06:43.680 17:10:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:06:43.680 17:10:26 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:43.680 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.680 Waiting for block devices as requested 00:06:43.938 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.938 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:06:43.938 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.938 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:49.266 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:49.266 17:10:31 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:06:49.266 17:10:31 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:06:49.266 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:49.266 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:49.266 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:49.266 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:49.266 17:10:32 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:06:49.266 00:06:49.266 real 0m54.247s 00:06:49.266 user 1m9.700s 00:06:49.266 sys 0m7.428s 00:06:49.266 17:10:32 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.266 17:10:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.266 ************************************ 00:06:49.266 END TEST blockdev_nvme_gpt 00:06:49.266 ************************************ 00:06:49.266 17:10:32 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:49.266 17:10:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:49.266 17:10:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.266 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:06:49.266 ************************************ 00:06:49.266 START TEST nvme 00:06:49.266 ************************************ 00:06:49.266 17:10:32 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:49.524 * Looking for test storage... 00:06:49.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:49.524 17:10:32 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.524 17:10:32 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.524 17:10:32 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.524 17:10:32 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.524 17:10:32 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.524 17:10:32 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.524 17:10:32 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.524 17:10:32 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.524 17:10:32 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.524 17:10:32 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.524 17:10:32 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.524 17:10:32 nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:49.524 17:10:32 nvme -- scripts/common.sh@345 -- # : 1 00:06:49.524 17:10:32 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.524 17:10:32 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.524 17:10:32 nvme -- scripts/common.sh@365 -- # decimal 1 00:06:49.524 17:10:32 nvme -- scripts/common.sh@353 -- # local d=1 00:06:49.524 17:10:32 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.524 17:10:32 nvme -- scripts/common.sh@355 -- # echo 1 00:06:49.524 17:10:32 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.524 17:10:32 nvme -- scripts/common.sh@366 -- # decimal 2 00:06:49.524 17:10:32 nvme -- scripts/common.sh@353 -- # local d=2 00:06:49.524 17:10:32 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.524 17:10:32 nvme -- scripts/common.sh@355 -- # echo 2 00:06:49.524 17:10:32 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.524 17:10:32 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.524 17:10:32 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.524 17:10:32 nvme -- scripts/common.sh@368 -- # return 0 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.524 --rc genhtml_branch_coverage=1 00:06:49.524 --rc genhtml_function_coverage=1 00:06:49.524 --rc genhtml_legend=1 00:06:49.524 --rc geninfo_all_blocks=1 00:06:49.524 --rc geninfo_unexecuted_blocks=1 00:06:49.524 00:06:49.524 ' 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.524 --rc genhtml_branch_coverage=1 00:06:49.524 --rc genhtml_function_coverage=1 00:06:49.524 --rc genhtml_legend=1 00:06:49.524 --rc geninfo_all_blocks=1 00:06:49.524 --rc geninfo_unexecuted_blocks=1 00:06:49.524 00:06:49.524 ' 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.524 --rc genhtml_branch_coverage=1 00:06:49.524 --rc genhtml_function_coverage=1 00:06:49.524 --rc genhtml_legend=1 00:06:49.524 --rc geninfo_all_blocks=1 00:06:49.524 --rc geninfo_unexecuted_blocks=1 00:06:49.524 00:06:49.524 ' 00:06:49.524 17:10:32 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.524 --rc genhtml_branch_coverage=1 00:06:49.524 --rc genhtml_function_coverage=1 00:06:49.524 --rc genhtml_legend=1 00:06:49.524 --rc geninfo_all_blocks=1 00:06:49.524 --rc geninfo_unexecuted_blocks=1 00:06:49.524 00:06:49.524 ' 00:06:49.524 17:10:32 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:49.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:50.348 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.348 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.348 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.348 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.348 17:10:33 nvme -- nvme/nvme.sh@79 -- # uname 00:06:50.348 17:10:33 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:06:50.348 17:10:33 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:06:50.348 17:10:33 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:06:50.348 17:10:33 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:06:50.348 17:10:33 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:06:50.348 17:10:33 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:06:50.348 17:10:33 nvme -- common/autotest_common.sh@1073 -- # stubpid=62671 00:06:50.348 Waiting for stub to ready for secondary processes... 00:06:50.348 17:10:33 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:06:50.348 17:10:33 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:50.348 17:10:33 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62671 ]] 00:06:50.348 17:10:33 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:06:50.348 17:10:33 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:06:50.605 [2024-10-30 17:10:33.351725] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:06:50.605 [2024-10-30 17:10:33.351817] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:06:51.171 [2024-10-30 17:10:34.062277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.430 [2024-10-30 17:10:34.154461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.430 [2024-10-30 17:10:34.154758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.430 [2024-10-30 17:10:34.154784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.430 [2024-10-30 17:10:34.167933] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:06:51.430 [2024-10-30 17:10:34.167971] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:51.430 [2024-10-30 17:10:34.177816] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:06:51.430 [2024-10-30 17:10:34.177897] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:06:51.430 [2024-10-30 17:10:34.179383] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:51.430 [2024-10-30 17:10:34.179516] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:06:51.430 [2024-10-30 17:10:34.179551] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:06:51.430 [2024-10-30 17:10:34.181669] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:51.430 [2024-10-30 17:10:34.181796] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:06:51.430 [2024-10-30 17:10:34.181851] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:06:51.430 [2024-10-30 17:10:34.183624] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:51.430 [2024-10-30 17:10:34.184033] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:06:51.430 [2024-10-30 17:10:34.184183] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:06:51.430 [2024-10-30 17:10:34.184342] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:06:51.430 [2024-10-30 17:10:34.184436] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:06:51.430 done. 00:06:51.430 17:10:34 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:51.430 17:10:34 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:06:51.430 17:10:34 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:06:51.430 17:10:34 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:06:51.430 17:10:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.430 17:10:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.430 ************************************ 00:06:51.430 START TEST nvme_reset 00:06:51.430 ************************************ 00:06:51.430 17:10:34 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:06:51.689 Initializing NVMe Controllers 00:06:51.689 Skipping QEMU NVMe SSD at 0000:00:10.0 00:06:51.689 Skipping QEMU NVMe SSD at 0000:00:11.0 00:06:51.689 Skipping QEMU NVMe SSD at 0000:00:13.0 00:06:51.689 Skipping QEMU NVMe SSD at 0000:00:12.0 00:06:51.689 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:06:51.689 00:06:51.689 real 0m0.192s 00:06:51.689 user 0m0.061s 00:06:51.689 sys 0m0.090s 00:06:51.689 17:10:34 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.689 17:10:34 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:06:51.689 ************************************ 00:06:51.689 END TEST nvme_reset 00:06:51.689 ************************************ 00:06:51.689 17:10:34 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:06:51.689 17:10:34 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.689 17:10:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.689 17:10:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.689 ************************************ 00:06:51.689 START TEST nvme_identify 00:06:51.689 ************************************ 00:06:51.689 17:10:34 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:06:51.689 17:10:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:06:51.689 17:10:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:06:51.689 17:10:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:06:51.689 17:10:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:06:51.689 17:10:34 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:51.689 17:10:34 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:06:51.689 17:10:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:51.689 17:10:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:51.689 17:10:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:51.689 17:10:34 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:06:51.689 17:10:34 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:51.689 17:10:34 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:06:51.951 [2024-10-30 
17:10:34.780234] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62692 terminated unexpected 00:06:51.951 ===================================================== 00:06:51.951 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:51.951 ===================================================== 00:06:51.951 Controller Capabilities/Features 00:06:51.951 ================================ 00:06:51.951 Vendor ID: 1b36 00:06:51.951 Subsystem Vendor ID: 1af4 00:06:51.951 Serial Number: 12340 00:06:51.951 Model Number: QEMU NVMe Ctrl 00:06:51.951 Firmware Version: 8.0.0 00:06:51.951 Recommended Arb Burst: 6 00:06:51.951 IEEE OUI Identifier: 00 54 52 00:06:51.951 Multi-path I/O 00:06:51.951 May have multiple subsystem ports: No 00:06:51.951 May have multiple controllers: No 00:06:51.951 Associated with SR-IOV VF: No 00:06:51.951 Max Data Transfer Size: 524288 00:06:51.951 Max Number of Namespaces: 256 00:06:51.951 Max Number of I/O Queues: 64 00:06:51.951 NVMe Specification Version (VS): 1.4 00:06:51.951 NVMe Specification Version (Identify): 1.4 00:06:51.951 Maximum Queue Entries: 2048 00:06:51.951 Contiguous Queues Required: Yes 00:06:51.951 Arbitration Mechanisms Supported 00:06:51.951 Weighted Round Robin: Not Supported 00:06:51.951 Vendor Specific: Not Supported 00:06:51.951 Reset Timeout: 7500 ms 00:06:51.951 Doorbell Stride: 4 bytes 00:06:51.951 NVM Subsystem Reset: Not Supported 00:06:51.951 Command Sets Supported 00:06:51.951 NVM Command Set: Supported 00:06:51.951 Boot Partition: Not Supported 00:06:51.951 Memory Page Size Minimum: 4096 bytes 00:06:51.951 Memory Page Size Maximum: 65536 bytes 00:06:51.951 Persistent Memory Region: Not Supported 00:06:51.951 Optional Asynchronous Events Supported 00:06:51.951 Namespace Attribute Notices: Supported 00:06:51.951 Firmware Activation Notices: Not Supported 00:06:51.951 ANA Change Notices: Not Supported 00:06:51.951 PLE Aggregate Log Change Notices: Not Supported 00:06:51.951 LBA Status Info Alert Notices: Not Supported 00:06:51.951 EGE Aggregate Log Change Notices: Not Supported 00:06:51.951 Normal NVM Subsystem Shutdown event: Not Supported 00:06:51.951 Zone Descriptor Change Notices: Not Supported 00:06:51.951 Discovery Log Change Notices: Not Supported 00:06:51.951 Controller Attributes 00:06:51.951 128-bit Host Identifier: Not Supported 00:06:51.951 Non-Operational Permissive Mode: Not Supported 00:06:51.951 NVM Sets: Not Supported 00:06:51.951 Read Recovery Levels: Not Supported 00:06:51.951 Endurance Groups: Not Supported 00:06:51.951 Predictable Latency Mode: Not Supported 00:06:51.951 Traffic Based Keep ALive: Not Supported 00:06:51.951 Namespace Granularity: Not Supported 00:06:51.951 SQ Associations: Not Supported 00:06:51.951 UUID List: Not Supported 00:06:51.951 Multi-Domain Subsystem: Not Supported 00:06:51.951 Fixed Capacity Management: Not Supported 00:06:51.951 Variable Capacity Management: Not Supported 00:06:51.951 Delete Endurance Group: Not Supported 00:06:51.951 Delete NVM Set: Not Supported 00:06:51.951 Extended LBA Formats Supported: Supported 00:06:51.951 Flexible Data Placement Supported: Not Supported 00:06:51.951 00:06:51.951 Controller Memory Buffer Support 00:06:51.951 ================================ 00:06:51.951 Supported: No 00:06:51.951 00:06:51.951 Persistent Memory Region Support 00:06:51.951 ================================ 00:06:51.951 Supported: No 00:06:51.951 00:06:51.951 Admin Command Set Attributes 00:06:51.951 ============================ 00:06:51.951 Security Send/Receive: 
Not Supported 00:06:51.951 Format NVM: Supported 00:06:51.951 Firmware Activate/Download: Not Supported 00:06:51.951 Namespace Management: Supported 00:06:51.951 Device Self-Test: Not Supported 00:06:51.951 Directives: Supported 00:06:51.951 NVMe-MI: Not Supported 00:06:51.951 Virtualization Management: Not Supported 00:06:51.951 Doorbell Buffer Config: Supported 00:06:51.951 Get LBA Status Capability: Not Supported 00:06:51.951 Command & Feature Lockdown Capability: Not Supported 00:06:51.951 Abort Command Limit: 4 00:06:51.951 Async Event Request Limit: 4 00:06:51.951 Number of Firmware Slots: N/A 00:06:51.951 Firmware Slot 1 Read-Only: N/A 00:06:51.951 Firmware Activation Without Reset: N/A 00:06:51.951 Multiple Update Detection Support: N/A 00:06:51.951 Firmware Update Granularity: No Information Provided 00:06:51.951 Per-Namespace SMART Log: Yes 00:06:51.951 Asymmetric Namespace Access Log Page: Not Supported 00:06:51.951 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:06:51.951 Command Effects Log Page: Supported 00:06:51.951 Get Log Page Extended Data: Supported 00:06:51.951 Telemetry Log Pages: Not Supported 00:06:51.951 Persistent Event Log Pages: Not Supported 00:06:51.951 Supported Log Pages Log Page: May Support 00:06:51.951 Commands Supported & Effects Log Page: Not Supported 00:06:51.951 Feature Identifiers & Effects Log Page:May Support 00:06:51.951 NVMe-MI Commands & Effects Log Page: May Support 00:06:51.951 Data Area 4 for Telemetry Log: Not Supported 00:06:51.951 Error Log Page Entries Supported: 1 00:06:51.951 Keep Alive: Not Supported 00:06:51.951 00:06:51.951 NVM Command Set Attributes 00:06:51.951 ========================== 00:06:51.951 Submission Queue Entry Size 00:06:51.951 Max: 64 00:06:51.951 Min: 64 00:06:51.951 Completion Queue Entry Size 00:06:51.951 Max: 16 00:06:51.951 Min: 16 00:06:51.951 Number of Namespaces: 256 00:06:51.951 Compare Command: Supported 00:06:51.951 Write Uncorrectable Command: Not Supported 00:06:51.951 Dataset Management Command: Supported 00:06:51.951 Write Zeroes Command: Supported 00:06:51.951 Set Features Save Field: Supported 00:06:51.951 Reservations: Not Supported 00:06:51.951 Timestamp: Supported 00:06:51.951 Copy: Supported 00:06:51.951 Volatile Write Cache: Present 00:06:51.951 Atomic Write Unit (Normal): 1 00:06:51.951 Atomic Write Unit (PFail): 1 00:06:51.951 Atomic Compare & Write Unit: 1 00:06:51.951 Fused Compare & Write: Not Supported 00:06:51.951 Scatter-Gather List 00:06:51.951 SGL Command Set: Supported 00:06:51.951 SGL Keyed: Not Supported 00:06:51.951 SGL Bit Bucket Descriptor: Not Supported 00:06:51.951 SGL Metadata Pointer: Not Supported 00:06:51.951 Oversized SGL: Not Supported 00:06:51.951 SGL Metadata Address: Not Supported 00:06:51.951 SGL Offset: Not Supported 00:06:51.951 Transport SGL Data Block: Not Supported 00:06:51.951 Replay Protected Memory Block: Not Supported 00:06:51.951 00:06:51.951 Firmware Slot Information 00:06:51.951 ========================= 00:06:51.951 Active slot: 1 00:06:51.951 Slot 1 Firmware Revision: 1.0 00:06:51.951 00:06:51.951 00:06:51.951 Commands Supported and Effects 00:06:51.951 ============================== 00:06:51.951 Admin Commands 00:06:51.951 -------------- 00:06:51.951 Delete I/O Submission Queue (00h): Supported 00:06:51.951 Create I/O Submission Queue (01h): Supported 00:06:51.951 Get Log Page (02h): Supported 00:06:51.951 Delete I/O Completion Queue (04h): Supported 00:06:51.951 Create I/O Completion Queue (05h): Supported 00:06:51.951 Identify (06h): Supported 
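# Editor's note: a minimal sketch of pulling individual fields back out of an identify dump like the
# one above, assuming the tool output has been captured to a plain-text file (identify.txt is a
# hypothetical name, not a file produced by this run):
awk -F': *' '/Serial Number:/        { print "serial:", $2 }
             /Current Temperature:/  { print "temp:", $2 }' identify.txt
# for the controllers shown here this would print e.g. "serial: 12340" and "temp: 323 Kelvin (50 Celsius)"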
00:06:51.951 Abort (08h): Supported 00:06:51.951 Set Features (09h): Supported 00:06:51.951 Get Features (0Ah): Supported 00:06:51.951 Asynchronous Event Request (0Ch): Supported 00:06:51.951 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:51.951 Directive Send (19h): Supported 00:06:51.951 Directive Receive (1Ah): Supported 00:06:51.951 Virtualization Management (1Ch): Supported 00:06:51.951 Doorbell Buffer Config (7Ch): Supported 00:06:51.951 Format NVM (80h): Supported LBA-Change 00:06:51.951 I/O Commands 00:06:51.951 ------------ 00:06:51.951 Flush (00h): Supported LBA-Change 00:06:51.951 Write (01h): Supported LBA-Change 00:06:51.951 Read (02h): Supported 00:06:51.951 Compare (05h): Supported 00:06:51.951 Write Zeroes (08h): Supported LBA-Change 00:06:51.951 Dataset Management (09h): Supported LBA-Change 00:06:51.951 Unknown (0Ch): Supported 00:06:51.951 Unknown (12h): Supported 00:06:51.951 Copy (19h): Supported LBA-Change 00:06:51.951 Unknown (1Dh): Supported LBA-Change 00:06:51.951 00:06:51.951 Error Log 00:06:51.951 ========= 00:06:51.951 00:06:51.951 Arbitration 00:06:51.951 =========== 00:06:51.951 Arbitration Burst: no limit 00:06:51.951 00:06:51.951 Power Management 00:06:51.951 ================ 00:06:51.951 Number of Power States: 1 00:06:51.951 Current Power State: Power State #0 00:06:51.951 Power State #0: 00:06:51.952 Max Power: 25.00 W 00:06:51.952 Non-Operational State: Operational 00:06:51.952 Entry Latency: 16 microseconds 00:06:51.952 Exit Latency: 4 microseconds 00:06:51.952 Relative Read Throughput: 0 00:06:51.952 Relative Read Latency: 0 00:06:51.952 Relative Write Throughput: 0 00:06:51.952 Relative Write Latency: 0 00:06:51.952 Idle Power[2024-10-30 17:10:34.781484] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62692 terminated unexpected 00:06:51.952 : Not Reported 00:06:51.952 Active Power: Not Reported 00:06:51.952 Non-Operational Permissive Mode: Not Supported 00:06:51.952 00:06:51.952 Health Information 00:06:51.952 ================== 00:06:51.952 Critical Warnings: 00:06:51.952 Available Spare Space: OK 00:06:51.952 Temperature: OK 00:06:51.952 Device Reliability: OK 00:06:51.952 Read Only: No 00:06:51.952 Volatile Memory Backup: OK 00:06:51.952 Current Temperature: 323 Kelvin (50 Celsius) 00:06:51.952 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:51.952 Available Spare: 0% 00:06:51.952 Available Spare Threshold: 0% 00:06:51.952 Life Percentage Used: 0% 00:06:51.952 Data Units Read: 732 00:06:51.952 Data Units Written: 660 00:06:51.952 Host Read Commands: 38619 00:06:51.952 Host Write Commands: 38405 00:06:51.952 Controller Busy Time: 0 minutes 00:06:51.952 Power Cycles: 0 00:06:51.952 Power On Hours: 0 hours 00:06:51.952 Unsafe Shutdowns: 0 00:06:51.952 Unrecoverable Media Errors: 0 00:06:51.952 Lifetime Error Log Entries: 0 00:06:51.952 Warning Temperature Time: 0 minutes 00:06:51.952 Critical Temperature Time: 0 minutes 00:06:51.952 00:06:51.952 Number of Queues 00:06:51.952 ================ 00:06:51.952 Number of I/O Submission Queues: 64 00:06:51.952 Number of I/O Completion Queues: 64 00:06:51.952 00:06:51.952 ZNS Specific Controller Data 00:06:51.952 ============================ 00:06:51.952 Zone Append Size Limit: 0 00:06:51.952 00:06:51.952 00:06:51.952 Active Namespaces 00:06:51.952 ================= 00:06:51.952 Namespace ID:1 00:06:51.952 Error Recovery Timeout: Unlimited 00:06:51.952 Command Set Identifier: NVM (00h) 00:06:51.952 Deallocate: Supported 00:06:51.952 
Deallocated/Unwritten Error: Supported 00:06:51.952 Deallocated Read Value: All 0x00 00:06:51.952 Deallocate in Write Zeroes: Not Supported 00:06:51.952 Deallocated Guard Field: 0xFFFF 00:06:51.952 Flush: Supported 00:06:51.952 Reservation: Not Supported 00:06:51.952 Metadata Transferred as: Separate Metadata Buffer 00:06:51.952 Namespace Sharing Capabilities: Private 00:06:51.952 Size (in LBAs): 1548666 (5GiB) 00:06:51.952 Capacity (in LBAs): 1548666 (5GiB) 00:06:51.952 Utilization (in LBAs): 1548666 (5GiB) 00:06:51.952 Thin Provisioning: Not Supported 00:06:51.952 Per-NS Atomic Units: No 00:06:51.952 Maximum Single Source Range Length: 128 00:06:51.952 Maximum Copy Length: 128 00:06:51.952 Maximum Source Range Count: 128 00:06:51.952 NGUID/EUI64 Never Reused: No 00:06:51.952 Namespace Write Protected: No 00:06:51.952 Number of LBA Formats: 8 00:06:51.952 Current LBA Format: LBA Format #07 00:06:51.952 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.952 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.952 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.952 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:51.952 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.952 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.952 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.952 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.952 00:06:51.952 NVM Specific Namespace Data 00:06:51.952 =========================== 00:06:51.952 Logical Block Storage Tag Mask: 0 00:06:51.952 Protection Information Capabilities: 00:06:51.952 16b Guard Protection Information Storage Tag Support: No 00:06:51.952 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:51.952 Storage Tag Check Read Support: No 00:06:51.952 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.952 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.952 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.952 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.952 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.952 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.952 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.952 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.952 ===================================================== 00:06:51.952 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:51.952 ===================================================== 00:06:51.952 Controller Capabilities/Features 00:06:51.952 ================================ 00:06:51.952 Vendor ID: 1b36 00:06:51.952 Subsystem Vendor ID: 1af4 00:06:51.952 Serial Number: 12341 00:06:51.952 Model Number: QEMU NVMe Ctrl 00:06:51.952 Firmware Version: 8.0.0 00:06:51.952 Recommended Arb Burst: 6 00:06:51.952 IEEE OUI Identifier: 00 54 52 00:06:51.952 Multi-path I/O 00:06:51.952 May have multiple subsystem ports: No 00:06:51.952 May have multiple controllers: No 00:06:51.952 Associated with SR-IOV VF: No 00:06:51.952 Max Data Transfer Size: 524288 00:06:51.952 Max Number of Namespaces: 256 00:06:51.952 Max Number of I/O Queues: 64 00:06:51.952 NVMe Specification Version (VS): 1.4 00:06:51.952 NVMe 
Specification Version (Identify): 1.4 00:06:51.952 Maximum Queue Entries: 2048 00:06:51.952 Contiguous Queues Required: Yes 00:06:51.952 Arbitration Mechanisms Supported 00:06:51.952 Weighted Round Robin: Not Supported 00:06:51.952 Vendor Specific: Not Supported 00:06:51.952 Reset Timeout: 7500 ms 00:06:51.952 Doorbell Stride: 4 bytes 00:06:51.952 NVM Subsystem Reset: Not Supported 00:06:51.952 Command Sets Supported 00:06:51.952 NVM Command Set: Supported 00:06:51.952 Boot Partition: Not Supported 00:06:51.952 Memory Page Size Minimum: 4096 bytes 00:06:51.952 Memory Page Size Maximum: 65536 bytes 00:06:51.952 Persistent Memory Region: Not Supported 00:06:51.952 Optional Asynchronous Events Supported 00:06:51.952 Namespace Attribute Notices: Supported 00:06:51.952 Firmware Activation Notices: Not Supported 00:06:51.952 ANA Change Notices: Not Supported 00:06:51.952 PLE Aggregate Log Change Notices: Not Supported 00:06:51.952 LBA Status Info Alert Notices: Not Supported 00:06:51.952 EGE Aggregate Log Change Notices: Not Supported 00:06:51.952 Normal NVM Subsystem Shutdown event: Not Supported 00:06:51.952 Zone Descriptor Change Notices: Not Supported 00:06:51.952 Discovery Log Change Notices: Not Supported 00:06:51.952 Controller Attributes 00:06:51.952 128-bit Host Identifier: Not Supported 00:06:51.952 Non-Operational Permissive Mode: Not Supported 00:06:51.952 NVM Sets: Not Supported 00:06:51.952 Read Recovery Levels: Not Supported 00:06:51.952 Endurance Groups: Not Supported 00:06:51.952 Predictable Latency Mode: Not Supported 00:06:51.952 Traffic Based Keep ALive: Not Supported 00:06:51.952 Namespace Granularity: Not Supported 00:06:51.952 SQ Associations: Not Supported 00:06:51.952 UUID List: Not Supported 00:06:51.952 Multi-Domain Subsystem: Not Supported 00:06:51.952 Fixed Capacity Management: Not Supported 00:06:51.952 Variable Capacity Management: Not Supported 00:06:51.952 Delete Endurance Group: Not Supported 00:06:51.952 Delete NVM Set: Not Supported 00:06:51.952 Extended LBA Formats Supported: Supported 00:06:51.952 Flexible Data Placement Supported: Not Supported 00:06:51.952 00:06:51.952 Controller Memory Buffer Support 00:06:51.952 ================================ 00:06:51.952 Supported: No 00:06:51.952 00:06:51.952 Persistent Memory Region Support 00:06:51.952 ================================ 00:06:51.952 Supported: No 00:06:51.952 00:06:51.952 Admin Command Set Attributes 00:06:51.952 ============================ 00:06:51.952 Security Send/Receive: Not Supported 00:06:51.952 Format NVM: Supported 00:06:51.952 Firmware Activate/Download: Not Supported 00:06:51.952 Namespace Management: Supported 00:06:51.952 Device Self-Test: Not Supported 00:06:51.952 Directives: Supported 00:06:51.952 NVMe-MI: Not Supported 00:06:51.952 Virtualization Management: Not Supported 00:06:51.952 Doorbell Buffer Config: Supported 00:06:51.952 Get LBA Status Capability: Not Supported 00:06:51.952 Command & Feature Lockdown Capability: Not Supported 00:06:51.952 Abort Command Limit: 4 00:06:51.952 Async Event Request Limit: 4 00:06:51.952 Number of Firmware Slots: N/A 00:06:51.952 Firmware Slot 1 Read-Only: N/A 00:06:51.952 Firmware Activation Without Reset: N/A 00:06:51.952 Multiple Update Detection Support: N/A 00:06:51.952 Firmware Update Granularity: No Information Provided 00:06:51.952 Per-Namespace SMART Log: Yes 00:06:51.952 Asymmetric Namespace Access Log Page: Not Supported 00:06:51.952 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:06:51.952 Command Effects Log Page: Supported 
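# Editor's note: for reference, the bdev_gpt_uuid assertions at the top of this section boil down to
# the pattern sketched below; the jq paths and the expected GUID are copied from this log, while the
# bdev name "p2" and the rpc.py invocation are assumptions, not taken from this run:
expected=abf1734f-66e5-4c0f-aa29-4021d4d307df
json="$(scripts/rpc.py bdev_get_bdevs -b p2)"   # query one GPT partition bdev from a running target
[[ "$(jq -r '.[0].aliases[0]' <<<"$json")" == "$expected" ]]
[[ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$json")" == "$expected" ]]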
00:06:51.952 Get Log Page Extended Data: Supported 00:06:51.952 Telemetry Log Pages: Not Supported 00:06:51.952 Persistent Event Log Pages: Not Supported 00:06:51.952 Supported Log Pages Log Page: May Support 00:06:51.952 Commands Supported & Effects Log Page: Not Supported 00:06:51.952 Feature Identifiers & Effects Log Page:May Support 00:06:51.953 NVMe-MI Commands & Effects Log Page: May Support 00:06:51.953 Data Area 4 for Telemetry Log: Not Supported 00:06:51.953 Error Log Page Entries Supported: 1 00:06:51.953 Keep Alive: Not Supported 00:06:51.953 00:06:51.953 NVM Command Set Attributes 00:06:51.953 ========================== 00:06:51.953 Submission Queue Entry Size 00:06:51.953 Max: 64 00:06:51.953 Min: 64 00:06:51.953 Completion Queue Entry Size 00:06:51.953 Max: 16 00:06:51.953 Min: 16 00:06:51.953 Number of Namespaces: 256 00:06:51.953 Compare Command: Supported 00:06:51.953 Write Uncorrectable Command: Not Supported 00:06:51.953 Dataset Management Command: Supported 00:06:51.953 Write Zeroes Command: Supported 00:06:51.953 Set Features Save Field: Supported 00:06:51.953 Reservations: Not Supported 00:06:51.953 Timestamp: Supported 00:06:51.953 Copy: Supported 00:06:51.953 Volatile Write Cache: Present 00:06:51.953 Atomic Write Unit (Normal): 1 00:06:51.953 Atomic Write Unit (PFail): 1 00:06:51.953 Atomic Compare & Write Unit: 1 00:06:51.953 Fused Compare & Write: Not Supported 00:06:51.953 Scatter-Gather List 00:06:51.953 SGL Command Set: Supported 00:06:51.953 SGL Keyed: Not Supported 00:06:51.953 SGL Bit Bucket Descriptor: Not Supported 00:06:51.953 SGL Metadata Pointer: Not Supported 00:06:51.953 Oversized SGL: Not Supported 00:06:51.953 SGL Metadata Address: Not Supported 00:06:51.953 SGL Offset: Not Supported 00:06:51.953 Transport SGL Data Block: Not Supported 00:06:51.953 Replay Protected Memory Block: Not Supported 00:06:51.953 00:06:51.953 Firmware Slot Information 00:06:51.953 ========================= 00:06:51.953 Active slot: 1 00:06:51.953 Slot 1 Firmware Revision: 1.0 00:06:51.953 00:06:51.953 00:06:51.953 Commands Supported and Effects 00:06:51.953 ============================== 00:06:51.953 Admin Commands 00:06:51.953 -------------- 00:06:51.953 Delete I/O Submission Queue (00h): Supported 00:06:51.953 Create I/O Submission Queue (01h): Supported 00:06:51.953 Get Log Page (02h): Supported 00:06:51.953 Delete I/O Completion Queue (04h): Supported 00:06:51.953 Create I/O Completion Queue (05h): Supported 00:06:51.953 Identify (06h): Supported 00:06:51.953 Abort (08h): Supported 00:06:51.953 Set Features (09h): Supported 00:06:51.953 Get Features (0Ah): Supported 00:06:51.953 Asynchronous Event Request (0Ch): Supported 00:06:51.953 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:51.953 Directive Send (19h): Supported 00:06:51.953 Directive Receive (1Ah): Supported 00:06:51.953 Virtualization Management (1Ch): Supported 00:06:51.953 Doorbell Buffer Config (7Ch): Supported 00:06:51.953 Format NVM (80h): Supported LBA-Change 00:06:51.953 I/O Commands 00:06:51.953 ------------ 00:06:51.953 Flush (00h): Supported LBA-Change 00:06:51.953 Write (01h): Supported LBA-Change 00:06:51.953 Read (02h): Supported 00:06:51.953 Compare (05h): Supported 00:06:51.953 Write Zeroes (08h): Supported LBA-Change 00:06:51.953 Dataset Management (09h): Supported LBA-Change 00:06:51.953 Unknown (0Ch): Supported 00:06:51.953 Unknown (12h): Supported 00:06:51.953 Copy (19h): Supported LBA-Change 00:06:51.953 Unknown (1Dh): Supported LBA-Change 00:06:51.953 00:06:51.953 Error 
Log 00:06:51.953 ========= 00:06:51.953 00:06:51.953 Arbitration 00:06:51.953 =========== 00:06:51.953 Arbitration Burst: no limit 00:06:51.953 00:06:51.953 Power Management 00:06:51.953 ================ 00:06:51.953 Number of Power States: 1 00:06:51.953 Current Power State: Power State #0 00:06:51.953 Power State #0: 00:06:51.953 Max Power: 25.00 W 00:06:51.953 Non-Operational State: Operational 00:06:51.953 Entry Latency: 16 microseconds 00:06:51.953 Exit Latency: 4 microseconds 00:06:51.953 Relative Read Throughput: 0 00:06:51.953 Relative Read Latency: 0 00:06:51.953 Relative Write Throughput: 0 00:06:51.953 Relative Write Latency: 0 00:06:51.953 Idle Power: Not Reported 00:06:51.953 Active Power: Not Reported 00:06:51.953 Non-Operational Permissive Mode: Not Supported 00:06:51.953 00:06:51.953 Health Information 00:06:51.953 ================== 00:06:51.953 Critical Warnings: 00:06:51.953 Available Spare Space: OK 00:06:51.953 Temperature: [2024-10-30 17:10:34.782159] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62692 terminated unexpected 00:06:51.953 OK 00:06:51.953 Device Reliability: OK 00:06:51.953 Read Only: No 00:06:51.953 Volatile Memory Backup: OK 00:06:51.953 Current Temperature: 323 Kelvin (50 Celsius) 00:06:51.953 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:51.953 Available Spare: 0% 00:06:51.953 Available Spare Threshold: 0% 00:06:51.953 Life Percentage Used: 0% 00:06:51.953 Data Units Read: 1141 00:06:51.953 Data Units Written: 1014 00:06:51.953 Host Read Commands: 59081 00:06:51.953 Host Write Commands: 57968 00:06:51.953 Controller Busy Time: 0 minutes 00:06:51.953 Power Cycles: 0 00:06:51.953 Power On Hours: 0 hours 00:06:51.953 Unsafe Shutdowns: 0 00:06:51.953 Unrecoverable Media Errors: 0 00:06:51.953 Lifetime Error Log Entries: 0 00:06:51.953 Warning Temperature Time: 0 minutes 00:06:51.953 Critical Temperature Time: 0 minutes 00:06:51.953 00:06:51.953 Number of Queues 00:06:51.953 ================ 00:06:51.953 Number of I/O Submission Queues: 64 00:06:51.953 Number of I/O Completion Queues: 64 00:06:51.953 00:06:51.953 ZNS Specific Controller Data 00:06:51.953 ============================ 00:06:51.953 Zone Append Size Limit: 0 00:06:51.953 00:06:51.953 00:06:51.953 Active Namespaces 00:06:51.953 ================= 00:06:51.953 Namespace ID:1 00:06:51.953 Error Recovery Timeout: Unlimited 00:06:51.953 Command Set Identifier: NVM (00h) 00:06:51.953 Deallocate: Supported 00:06:51.953 Deallocated/Unwritten Error: Supported 00:06:51.953 Deallocated Read Value: All 0x00 00:06:51.953 Deallocate in Write Zeroes: Not Supported 00:06:51.953 Deallocated Guard Field: 0xFFFF 00:06:51.953 Flush: Supported 00:06:51.953 Reservation: Not Supported 00:06:51.953 Namespace Sharing Capabilities: Private 00:06:51.953 Size (in LBAs): 1310720 (5GiB) 00:06:51.953 Capacity (in LBAs): 1310720 (5GiB) 00:06:51.953 Utilization (in LBAs): 1310720 (5GiB) 00:06:51.953 Thin Provisioning: Not Supported 00:06:51.953 Per-NS Atomic Units: No 00:06:51.953 Maximum Single Source Range Length: 128 00:06:51.953 Maximum Copy Length: 128 00:06:51.953 Maximum Source Range Count: 128 00:06:51.953 NGUID/EUI64 Never Reused: No 00:06:51.953 Namespace Write Protected: No 00:06:51.953 Number of LBA Formats: 8 00:06:51.953 Current LBA Format: LBA Format #04 00:06:51.953 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.953 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.953 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.953 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:06:51.953 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.953 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.953 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.953 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.953 00:06:51.953 NVM Specific Namespace Data 00:06:51.953 =========================== 00:06:51.953 Logical Block Storage Tag Mask: 0 00:06:51.953 Protection Information Capabilities: 00:06:51.953 16b Guard Protection Information Storage Tag Support: No 00:06:51.953 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:51.953 Storage Tag Check Read Support: No 00:06:51.953 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.953 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.953 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.953 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.953 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.953 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.953 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.953 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.953 ===================================================== 00:06:51.953 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:51.953 ===================================================== 00:06:51.953 Controller Capabilities/Features 00:06:51.953 ================================ 00:06:51.953 Vendor ID: 1b36 00:06:51.953 Subsystem Vendor ID: 1af4 00:06:51.953 Serial Number: 12343 00:06:51.953 Model Number: QEMU NVMe Ctrl 00:06:51.953 Firmware Version: 8.0.0 00:06:51.953 Recommended Arb Burst: 6 00:06:51.953 IEEE OUI Identifier: 00 54 52 00:06:51.953 Multi-path I/O 00:06:51.953 May have multiple subsystem ports: No 00:06:51.953 May have multiple controllers: Yes 00:06:51.953 Associated with SR-IOV VF: No 00:06:51.953 Max Data Transfer Size: 524288 00:06:51.953 Max Number of Namespaces: 256 00:06:51.953 Max Number of I/O Queues: 64 00:06:51.953 NVMe Specification Version (VS): 1.4 00:06:51.953 NVMe Specification Version (Identify): 1.4 00:06:51.953 Maximum Queue Entries: 2048 00:06:51.953 Contiguous Queues Required: Yes 00:06:51.953 Arbitration Mechanisms Supported 00:06:51.953 Weighted Round Robin: Not Supported 00:06:51.953 Vendor Specific: Not Supported 00:06:51.954 Reset Timeout: 7500 ms 00:06:51.954 Doorbell Stride: 4 bytes 00:06:51.954 NVM Subsystem Reset: Not Supported 00:06:51.954 Command Sets Supported 00:06:51.954 NVM Command Set: Supported 00:06:51.954 Boot Partition: Not Supported 00:06:51.954 Memory Page Size Minimum: 4096 bytes 00:06:51.954 Memory Page Size Maximum: 65536 bytes 00:06:51.954 Persistent Memory Region: Not Supported 00:06:51.954 Optional Asynchronous Events Supported 00:06:51.954 Namespace Attribute Notices: Supported 00:06:51.954 Firmware Activation Notices: Not Supported 00:06:51.954 ANA Change Notices: Not Supported 00:06:51.954 PLE Aggregate Log Change Notices: Not Supported 00:06:51.954 LBA Status Info Alert Notices: Not Supported 00:06:51.954 EGE Aggregate Log Change Notices: Not Supported 00:06:51.954 Normal NVM Subsystem Shutdown event: Not Supported 00:06:51.954 Zone 
Descriptor Change Notices: Not Supported 00:06:51.954 Discovery Log Change Notices: Not Supported 00:06:51.954 Controller Attributes 00:06:51.954 128-bit Host Identifier: Not Supported 00:06:51.954 Non-Operational Permissive Mode: Not Supported 00:06:51.954 NVM Sets: Not Supported 00:06:51.954 Read Recovery Levels: Not Supported 00:06:51.954 Endurance Groups: Supported 00:06:51.954 Predictable Latency Mode: Not Supported 00:06:51.954 Traffic Based Keep ALive: Not Supported 00:06:51.954 Namespace Granularity: Not Supported 00:06:51.954 SQ Associations: Not Supported 00:06:51.954 UUID List: Not Supported 00:06:51.954 Multi-Domain Subsystem: Not Supported 00:06:51.954 Fixed Capacity Management: Not Supported 00:06:51.954 Variable Capacity Management: Not Supported 00:06:51.954 Delete Endurance Group: Not Supported 00:06:51.954 Delete NVM Set: Not Supported 00:06:51.954 Extended LBA Formats Supported: Supported 00:06:51.954 Flexible Data Placement Supported: Supported 00:06:51.954 00:06:51.954 Controller Memory Buffer Support 00:06:51.954 ================================ 00:06:51.954 Supported: No 00:06:51.954 00:06:51.954 Persistent Memory Region Support 00:06:51.954 ================================ 00:06:51.954 Supported: No 00:06:51.954 00:06:51.954 Admin Command Set Attributes 00:06:51.954 ============================ 00:06:51.954 Security Send/Receive: Not Supported 00:06:51.954 Format NVM: Supported 00:06:51.954 Firmware Activate/Download: Not Supported 00:06:51.954 Namespace Management: Supported 00:06:51.954 Device Self-Test: Not Supported 00:06:51.954 Directives: Supported 00:06:51.954 NVMe-MI: Not Supported 00:06:51.954 Virtualization Management: Not Supported 00:06:51.954 Doorbell Buffer Config: Supported 00:06:51.954 Get LBA Status Capability: Not Supported 00:06:51.954 Command & Feature Lockdown Capability: Not Supported 00:06:51.954 Abort Command Limit: 4 00:06:51.954 Async Event Request Limit: 4 00:06:51.954 Number of Firmware Slots: N/A 00:06:51.954 Firmware Slot 1 Read-Only: N/A 00:06:51.954 Firmware Activation Without Reset: N/A 00:06:51.954 Multiple Update Detection Support: N/A 00:06:51.954 Firmware Update Granularity: No Information Provided 00:06:51.954 Per-Namespace SMART Log: Yes 00:06:51.954 Asymmetric Namespace Access Log Page: Not Supported 00:06:51.954 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:06:51.954 Command Effects Log Page: Supported 00:06:51.954 Get Log Page Extended Data: Supported 00:06:51.954 Telemetry Log Pages: Not Supported 00:06:51.954 Persistent Event Log Pages: Not Supported 00:06:51.954 Supported Log Pages Log Page: May Support 00:06:51.954 Commands Supported & Effects Log Page: Not Supported 00:06:51.954 Feature Identifiers & Effects Log Page:May Support 00:06:51.954 NVMe-MI Commands & Effects Log Page: May Support 00:06:51.954 Data Area 4 for Telemetry Log: Not Supported 00:06:51.954 Error Log Page Entries Supported: 1 00:06:51.954 Keep Alive: Not Supported 00:06:51.954 00:06:51.954 NVM Command Set Attributes 00:06:51.954 ========================== 00:06:51.954 Submission Queue Entry Size 00:06:51.954 Max: 64 00:06:51.954 Min: 64 00:06:51.954 Completion Queue Entry Size 00:06:51.954 Max: 16 00:06:51.954 Min: 16 00:06:51.954 Number of Namespaces: 256 00:06:51.954 Compare Command: Supported 00:06:51.954 Write Uncorrectable Command: Not Supported 00:06:51.954 Dataset Management Command: Supported 00:06:51.954 Write Zeroes Command: Supported 00:06:51.954 Set Features Save Field: Supported 00:06:51.954 Reservations: Not Supported 00:06:51.954 
Timestamp: Supported 00:06:51.954 Copy: Supported 00:06:51.954 Volatile Write Cache: Present 00:06:51.954 Atomic Write Unit (Normal): 1 00:06:51.954 Atomic Write Unit (PFail): 1 00:06:51.954 Atomic Compare & Write Unit: 1 00:06:51.954 Fused Compare & Write: Not Supported 00:06:51.954 Scatter-Gather List 00:06:51.954 SGL Command Set: Supported 00:06:51.954 SGL Keyed: Not Supported 00:06:51.954 SGL Bit Bucket Descriptor: Not Supported 00:06:51.954 SGL Metadata Pointer: Not Supported 00:06:51.954 Oversized SGL: Not Supported 00:06:51.954 SGL Metadata Address: Not Supported 00:06:51.954 SGL Offset: Not Supported 00:06:51.954 Transport SGL Data Block: Not Supported 00:06:51.954 Replay Protected Memory Block: Not Supported 00:06:51.954 00:06:51.954 Firmware Slot Information 00:06:51.954 ========================= 00:06:51.954 Active slot: 1 00:06:51.954 Slot 1 Firmware Revision: 1.0 00:06:51.954 00:06:51.954 00:06:51.954 Commands Supported and Effects 00:06:51.954 ============================== 00:06:51.954 Admin Commands 00:06:51.954 -------------- 00:06:51.954 Delete I/O Submission Queue (00h): Supported 00:06:51.954 Create I/O Submission Queue (01h): Supported 00:06:51.954 Get Log Page (02h): Supported 00:06:51.954 Delete I/O Completion Queue (04h): Supported 00:06:51.954 Create I/O Completion Queue (05h): Supported 00:06:51.954 Identify (06h): Supported 00:06:51.954 Abort (08h): Supported 00:06:51.954 Set Features (09h): Supported 00:06:51.954 Get Features (0Ah): Supported 00:06:51.954 Asynchronous Event Request (0Ch): Supported 00:06:51.954 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:51.954 Directive Send (19h): Supported 00:06:51.954 Directive Receive (1Ah): Supported 00:06:51.954 Virtualization Management (1Ch): Supported 00:06:51.954 Doorbell Buffer Config (7Ch): Supported 00:06:51.954 Format NVM (80h): Supported LBA-Change 00:06:51.954 I/O Commands 00:06:51.954 ------------ 00:06:51.954 Flush (00h): Supported LBA-Change 00:06:51.954 Write (01h): Supported LBA-Change 00:06:51.954 Read (02h): Supported 00:06:51.954 Compare (05h): Supported 00:06:51.954 Write Zeroes (08h): Supported LBA-Change 00:06:51.954 Dataset Management (09h): Supported LBA-Change 00:06:51.954 Unknown (0Ch): Supported 00:06:51.954 Unknown (12h): Supported 00:06:51.954 Copy (19h): Supported LBA-Change 00:06:51.954 Unknown (1Dh): Supported LBA-Change 00:06:51.954 00:06:51.954 Error Log 00:06:51.954 ========= 00:06:51.954 00:06:51.954 Arbitration 00:06:51.954 =========== 00:06:51.954 Arbitration Burst: no limit 00:06:51.954 00:06:51.954 Power Management 00:06:51.954 ================ 00:06:51.954 Number of Power States: 1 00:06:51.954 Current Power State: Power State #0 00:06:51.954 Power State #0: 00:06:51.954 Max Power: 25.00 W 00:06:51.954 Non-Operational State: Operational 00:06:51.954 Entry Latency: 16 microseconds 00:06:51.954 Exit Latency: 4 microseconds 00:06:51.954 Relative Read Throughput: 0 00:06:51.954 Relative Read Latency: 0 00:06:51.954 Relative Write Throughput: 0 00:06:51.954 Relative Write Latency: 0 00:06:51.954 Idle Power: Not Reported 00:06:51.954 Active Power: Not Reported 00:06:51.954 Non-Operational Permissive Mode: Not Supported 00:06:51.954 00:06:51.954 Health Information 00:06:51.954 ================== 00:06:51.954 Critical Warnings: 00:06:51.954 Available Spare Space: OK 00:06:51.954 Temperature: OK 00:06:51.954 Device Reliability: OK 00:06:51.954 Read Only: No 00:06:51.954 Volatile Memory Backup: OK 00:06:51.954 Current Temperature: 323 Kelvin (50 Celsius) 00:06:51.954 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:51.954 Available Spare: 0% 00:06:51.954 Available Spare Threshold: 0% 00:06:51.954 Life Percentage Used: 0% 00:06:51.954 Data Units Read: 928 00:06:51.954 Data Units Written: 858 00:06:51.954 Host Read Commands: 40631 00:06:51.954 Host Write Commands: 40054 00:06:51.954 Controller Busy Time: 0 minutes 00:06:51.954 Power Cycles: 0 00:06:51.954 Power On Hours: 0 hours 00:06:51.954 Unsafe Shutdowns: 0 00:06:51.954 Unrecoverable Media Errors: 0 00:06:51.954 Lifetime Error Log Entries: 0 00:06:51.954 Warning Temperature Time: 0 minutes 00:06:51.954 Critical Temperature Time: 0 minutes 00:06:51.954 00:06:51.954 Number of Queues 00:06:51.954 ================ 00:06:51.954 Number of I/O Submission Queues: 64 00:06:51.954 Number of I/O Completion Queues: 64 00:06:51.954 00:06:51.954 ZNS Specific Controller Data 00:06:51.954 ============================ 00:06:51.954 Zone Append Size Limit: 0 00:06:51.954 00:06:51.954 00:06:51.954 Active Namespaces 00:06:51.954 ================= 00:06:51.954 Namespace ID:1 00:06:51.954 Error Recovery Timeout: Unlimited 00:06:51.955 Command Set Identifier: NVM (00h) 00:06:51.955 Deallocate: Supported 00:06:51.955 Deallocated/Unwritten Error: Supported 00:06:51.955 Deallocated Read Value: All 0x00 00:06:51.955 Deallocate in Write Zeroes: Not Supported 00:06:51.955 Deallocated Guard Field: 0xFFFF 00:06:51.955 Flush: Supported 00:06:51.955 Reservation: Not Supported 00:06:51.955 Namespace Sharing Capabilities: Multiple Controllers 00:06:51.955 Size (in LBAs): 262144 (1GiB) 00:06:51.955 Capacity (in LBAs): 262144 (1GiB) 00:06:51.955 Utilization (in LBAs): 262144 (1GiB) 00:06:51.955 Thin Provisioning: Not Supported 00:06:51.955 Per-NS Atomic Units: No 00:06:51.955 Maximum Single Source Range Length: 128 00:06:51.955 Maximum Copy Length: 128 00:06:51.955 Maximum Source Range Count: 128 00:06:51.955 NGUID/EUI64 Never Reused: No 00:06:51.955 Namespace Write Protected: No 00:06:51.955 Endurance group ID: 1 00:06:51.955 Number of LBA Formats: 8 00:06:51.955 Current LBA Format: LBA Format #04 00:06:51.955 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.955 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.955 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.955 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:51.955 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.955 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.955 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.955 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.955 00:06:51.955 Get Feature FDP: 00:06:51.955 ================ 00:06:51.955 Enabled: Yes 00:06:51.955 FDP configuration index: 0 00:06:51.955 00:06:51.955 FDP configurations log page 00:06:51.955 =========================== 00:06:51.955 Number of FDP configurations: 1 00:06:51.955 Version: 0 00:06:51.955 Size: 112 00:06:51.955 FDP Configuration Descriptor: 0 00:06:51.955 Descriptor Size: 96 00:06:51.955 Reclaim Group Identifier format: 2 00:06:51.955 FDP Volatile Write Cache: Not Present 00:06:51.955 FDP Configuration: Valid 00:06:51.955 Vendor Specific Size: 0 00:06:51.955 Number of Reclaim Groups: 2 00:06:51.955 Number of Recalim Unit Handles: 8 00:06:51.955 Max Placement Identifiers: 128 00:06:51.955 Number of Namespaces Suppprted: 256 00:06:51.955 Reclaim unit Nominal Size: 6000000 bytes 00:06:51.955 Estimated Reclaim Unit Time Limit: Not Reported 00:06:51.955 RUH Desc #000: RUH Type: Initially Isolated 00:06:51.955 RUH Desc #001: RUH 
Type: Initially Isolated 00:06:51.955 RUH Desc #002: RUH Type: Initially Isolated 00:06:51.955 RUH Desc #003: RUH Type: Initially Isolated 00:06:51.955 RUH Desc #004: RUH Type: Initially Isolated 00:06:51.955 RUH Desc #005: RUH Type: Initially Isolated 00:06:51.955 RUH Desc #006: RUH Type: Initially Isolated 00:06:51.955 RUH Desc #007: RUH Type: Initially Isolated 00:06:51.955 00:06:51.955 FDP reclaim unit handle usage log page 00:06:51.955 ====================================== 00:06:51.955 Number of Reclaim Unit Handles: 8 00:06:51.955 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:06:51.955 RUH Usage Desc #001: RUH Attributes: Unused 00:06:51.955 RUH Usage Desc #002: RUH Attributes: Unused 00:06:51.955 RUH Usage Desc #003: RUH Attributes: Unused 00:06:51.955 RUH Usage Desc #004: RUH Attributes: Unused 00:06:51.955 RUH Usage Desc #005: RUH Attributes: Unused 00:06:51.955 RUH Usage Desc #006: RUH Attributes: Unused 00:06:51.955 RUH Usage Desc #007: RUH Attributes: Unused 00:06:51.955 00:06:51.955 FDP statistics log page 00:06:51.955 ======================= 00:06:51.955 Host bytes with metadata written: 548380672 00:06:51.955 Med[2024-10-30 17:10:34.783367] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62692 terminated unexpected 00:06:51.955 ia bytes with metadata written: 548458496 00:06:51.955 Media bytes erased: 0 00:06:51.955 00:06:51.955 FDP events log page 00:06:51.955 =================== 00:06:51.955 Number of FDP events: 0 00:06:51.955 00:06:51.955 NVM Specific Namespace Data 00:06:51.955 =========================== 00:06:51.955 Logical Block Storage Tag Mask: 0 00:06:51.955 Protection Information Capabilities: 00:06:51.955 16b Guard Protection Information Storage Tag Support: No 00:06:51.955 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:51.955 Storage Tag Check Read Support: No 00:06:51.955 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.955 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.955 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.955 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.955 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.955 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.955 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.955 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.955 ===================================================== 00:06:51.955 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:51.955 ===================================================== 00:06:51.955 Controller Capabilities/Features 00:06:51.955 ================================ 00:06:51.955 Vendor ID: 1b36 00:06:51.955 Subsystem Vendor ID: 1af4 00:06:51.955 Serial Number: 12342 00:06:51.955 Model Number: QEMU NVMe Ctrl 00:06:51.955 Firmware Version: 8.0.0 00:06:51.955 Recommended Arb Burst: 6 00:06:51.955 IEEE OUI Identifier: 00 54 52 00:06:51.955 Multi-path I/O 00:06:51.955 May have multiple subsystem ports: No 00:06:51.955 May have multiple controllers: No 00:06:51.955 Associated with SR-IOV VF: No 00:06:51.955 Max Data Transfer Size: 524288 00:06:51.955 Max Number of Namespaces: 256 
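# Editor's note: a quick cross-check of the FDP statistics reported a few lines above for the
# FDP-enabled 12343 controller (the byte counts are copied from this log; dividing media bytes by
# host bytes gives an approximate write-amplification factor):
host_bytes=548380672
media_bytes=548458496
awk -v h="$host_bytes" -v m="$media_bytes" 'BEGIN { printf "write amplification ~ %.4f\n", m / h }'
# prints "write amplification ~ 1.0001", i.e. media writes barely exceed host writes for this run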
00:06:51.955 Max Number of I/O Queues: 64 00:06:51.955 NVMe Specification Version (VS): 1.4 00:06:51.955 NVMe Specification Version (Identify): 1.4 00:06:51.955 Maximum Queue Entries: 2048 00:06:51.955 Contiguous Queues Required: Yes 00:06:51.955 Arbitration Mechanisms Supported 00:06:51.955 Weighted Round Robin: Not Supported 00:06:51.955 Vendor Specific: Not Supported 00:06:51.955 Reset Timeout: 7500 ms 00:06:51.955 Doorbell Stride: 4 bytes 00:06:51.955 NVM Subsystem Reset: Not Supported 00:06:51.955 Command Sets Supported 00:06:51.955 NVM Command Set: Supported 00:06:51.955 Boot Partition: Not Supported 00:06:51.955 Memory Page Size Minimum: 4096 bytes 00:06:51.955 Memory Page Size Maximum: 65536 bytes 00:06:51.955 Persistent Memory Region: Not Supported 00:06:51.955 Optional Asynchronous Events Supported 00:06:51.955 Namespace Attribute Notices: Supported 00:06:51.955 Firmware Activation Notices: Not Supported 00:06:51.955 ANA Change Notices: Not Supported 00:06:51.955 PLE Aggregate Log Change Notices: Not Supported 00:06:51.955 LBA Status Info Alert Notices: Not Supported 00:06:51.955 EGE Aggregate Log Change Notices: Not Supported 00:06:51.955 Normal NVM Subsystem Shutdown event: Not Supported 00:06:51.955 Zone Descriptor Change Notices: Not Supported 00:06:51.955 Discovery Log Change Notices: Not Supported 00:06:51.955 Controller Attributes 00:06:51.955 128-bit Host Identifier: Not Supported 00:06:51.955 Non-Operational Permissive Mode: Not Supported 00:06:51.955 NVM Sets: Not Supported 00:06:51.955 Read Recovery Levels: Not Supported 00:06:51.955 Endurance Groups: Not Supported 00:06:51.955 Predictable Latency Mode: Not Supported 00:06:51.955 Traffic Based Keep ALive: Not Supported 00:06:51.955 Namespace Granularity: Not Supported 00:06:51.955 SQ Associations: Not Supported 00:06:51.955 UUID List: Not Supported 00:06:51.955 Multi-Domain Subsystem: Not Supported 00:06:51.955 Fixed Capacity Management: Not Supported 00:06:51.955 Variable Capacity Management: Not Supported 00:06:51.955 Delete Endurance Group: Not Supported 00:06:51.955 Delete NVM Set: Not Supported 00:06:51.955 Extended LBA Formats Supported: Supported 00:06:51.955 Flexible Data Placement Supported: Not Supported 00:06:51.955 00:06:51.955 Controller Memory Buffer Support 00:06:51.955 ================================ 00:06:51.956 Supported: No 00:06:51.956 00:06:51.956 Persistent Memory Region Support 00:06:51.956 ================================ 00:06:51.956 Supported: No 00:06:51.956 00:06:51.956 Admin Command Set Attributes 00:06:51.956 ============================ 00:06:51.956 Security Send/Receive: Not Supported 00:06:51.956 Format NVM: Supported 00:06:51.956 Firmware Activate/Download: Not Supported 00:06:51.956 Namespace Management: Supported 00:06:51.956 Device Self-Test: Not Supported 00:06:51.956 Directives: Supported 00:06:51.956 NVMe-MI: Not Supported 00:06:51.956 Virtualization Management: Not Supported 00:06:51.956 Doorbell Buffer Config: Supported 00:06:51.956 Get LBA Status Capability: Not Supported 00:06:51.956 Command & Feature Lockdown Capability: Not Supported 00:06:51.956 Abort Command Limit: 4 00:06:51.956 Async Event Request Limit: 4 00:06:51.956 Number of Firmware Slots: N/A 00:06:51.956 Firmware Slot 1 Read-Only: N/A 00:06:51.956 Firmware Activation Without Reset: N/A 00:06:51.956 Multiple Update Detection Support: N/A 00:06:51.956 Firmware Update Granularity: No Information Provided 00:06:51.956 Per-Namespace SMART Log: Yes 00:06:51.956 Asymmetric Namespace Access Log Page: Not Supported 
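# Editor's note: the lcov version gate traced near the start of TEST nvme above ("lt 1.15 2") relies
# on a field-by-field comparison in scripts/common.sh; the sketch below is an editor-added equivalent
# using sort -V (GNU coreutils assumed), not the implementation the test actually runs:
version_lt() {  # succeed when $1 sorts strictly before $2 as a version string
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo "lcov predates 2.x, keep the old --rc coverage options"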
00:06:51.956 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:06:51.956 Command Effects Log Page: Supported 00:06:51.956 Get Log Page Extended Data: Supported 00:06:51.956 Telemetry Log Pages: Not Supported 00:06:51.956 Persistent Event Log Pages: Not Supported 00:06:51.956 Supported Log Pages Log Page: May Support 00:06:51.956 Commands Supported & Effects Log Page: Not Supported 00:06:51.956 Feature Identifiers & Effects Log Page:May Support 00:06:51.956 NVMe-MI Commands & Effects Log Page: May Support 00:06:51.956 Data Area 4 for Telemetry Log: Not Supported 00:06:51.956 Error Log Page Entries Supported: 1 00:06:51.956 Keep Alive: Not Supported 00:06:51.956 00:06:51.956 NVM Command Set Attributes 00:06:51.956 ========================== 00:06:51.956 Submission Queue Entry Size 00:06:51.956 Max: 64 00:06:51.956 Min: 64 00:06:51.956 Completion Queue Entry Size 00:06:51.956 Max: 16 00:06:51.956 Min: 16 00:06:51.956 Number of Namespaces: 256 00:06:51.956 Compare Command: Supported 00:06:51.956 Write Uncorrectable Command: Not Supported 00:06:51.956 Dataset Management Command: Supported 00:06:51.956 Write Zeroes Command: Supported 00:06:51.956 Set Features Save Field: Supported 00:06:51.956 Reservations: Not Supported 00:06:51.956 Timestamp: Supported 00:06:51.956 Copy: Supported 00:06:51.956 Volatile Write Cache: Present 00:06:51.956 Atomic Write Unit (Normal): 1 00:06:51.956 Atomic Write Unit (PFail): 1 00:06:51.956 Atomic Compare & Write Unit: 1 00:06:51.956 Fused Compare & Write: Not Supported 00:06:51.956 Scatter-Gather List 00:06:51.956 SGL Command Set: Supported 00:06:51.956 SGL Keyed: Not Supported 00:06:51.956 SGL Bit Bucket Descriptor: Not Supported 00:06:51.956 SGL Metadata Pointer: Not Supported 00:06:51.956 Oversized SGL: Not Supported 00:06:51.956 SGL Metadata Address: Not Supported 00:06:51.956 SGL Offset: Not Supported 00:06:51.956 Transport SGL Data Block: Not Supported 00:06:51.956 Replay Protected Memory Block: Not Supported 00:06:51.956 00:06:51.956 Firmware Slot Information 00:06:51.956 ========================= 00:06:51.956 Active slot: 1 00:06:51.956 Slot 1 Firmware Revision: 1.0 00:06:51.956 00:06:51.956 00:06:51.956 Commands Supported and Effects 00:06:51.956 ============================== 00:06:51.956 Admin Commands 00:06:51.956 -------------- 00:06:51.956 Delete I/O Submission Queue (00h): Supported 00:06:51.956 Create I/O Submission Queue (01h): Supported 00:06:51.956 Get Log Page (02h): Supported 00:06:51.956 Delete I/O Completion Queue (04h): Supported 00:06:51.956 Create I/O Completion Queue (05h): Supported 00:06:51.956 Identify (06h): Supported 00:06:51.956 Abort (08h): Supported 00:06:51.956 Set Features (09h): Supported 00:06:51.956 Get Features (0Ah): Supported 00:06:51.956 Asynchronous Event Request (0Ch): Supported 00:06:51.956 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:51.956 Directive Send (19h): Supported 00:06:51.956 Directive Receive (1Ah): Supported 00:06:51.956 Virtualization Management (1Ch): Supported 00:06:51.956 Doorbell Buffer Config (7Ch): Supported 00:06:51.956 Format NVM (80h): Supported LBA-Change 00:06:51.956 I/O Commands 00:06:51.956 ------------ 00:06:51.956 Flush (00h): Supported LBA-Change 00:06:51.956 Write (01h): Supported LBA-Change 00:06:51.956 Read (02h): Supported 00:06:51.956 Compare (05h): Supported 00:06:51.956 Write Zeroes (08h): Supported LBA-Change 00:06:51.956 Dataset Management (09h): Supported LBA-Change 00:06:51.956 Unknown (0Ch): Supported 00:06:51.956 Unknown (12h): Supported 00:06:51.956 Copy (19h): 
Supported LBA-Change 00:06:51.956 Unknown (1Dh): Supported LBA-Change 00:06:51.956 00:06:51.956 Error Log 00:06:51.956 ========= 00:06:51.956 00:06:51.956 Arbitration 00:06:51.956 =========== 00:06:51.956 Arbitration Burst: no limit 00:06:51.956 00:06:51.956 Power Management 00:06:51.956 ================ 00:06:51.956 Number of Power States: 1 00:06:51.956 Current Power State: Power State #0 00:06:51.956 Power State #0: 00:06:51.956 Max Power: 25.00 W 00:06:51.956 Non-Operational State: Operational 00:06:51.956 Entry Latency: 16 microseconds 00:06:51.956 Exit Latency: 4 microseconds 00:06:51.956 Relative Read Throughput: 0 00:06:51.956 Relative Read Latency: 0 00:06:51.956 Relative Write Throughput: 0 00:06:51.956 Relative Write Latency: 0 00:06:51.956 Idle Power: Not Reported 00:06:51.956 Active Power: Not Reported 00:06:51.956 Non-Operational Permissive Mode: Not Supported 00:06:51.956 00:06:51.956 Health Information 00:06:51.956 ================== 00:06:51.956 Critical Warnings: 00:06:51.956 Available Spare Space: OK 00:06:51.956 Temperature: OK 00:06:51.956 Device Reliability: OK 00:06:51.956 Read Only: No 00:06:51.956 Volatile Memory Backup: OK 00:06:51.956 Current Temperature: 323 Kelvin (50 Celsius) 00:06:51.956 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:51.956 Available Spare: 0% 00:06:51.956 Available Spare Threshold: 0% 00:06:51.956 Life Percentage Used: 0% 00:06:51.956 Data Units Read: 2368 00:06:51.956 Data Units Written: 2155 00:06:51.956 Host Read Commands: 117857 00:06:51.956 Host Write Commands: 116126 00:06:51.956 Controller Busy Time: 0 minutes 00:06:51.956 Power Cycles: 0 00:06:51.956 Power On Hours: 0 hours 00:06:51.956 Unsafe Shutdowns: 0 00:06:51.956 Unrecoverable Media Errors: 0 00:06:51.956 Lifetime Error Log Entries: 0 00:06:51.956 Warning Temperature Time: 0 minutes 00:06:51.956 Critical Temperature Time: 0 minutes 00:06:51.956 00:06:51.956 Number of Queues 00:06:51.956 ================ 00:06:51.956 Number of I/O Submission Queues: 64 00:06:51.956 Number of I/O Completion Queues: 64 00:06:51.956 00:06:51.956 ZNS Specific Controller Data 00:06:51.956 ============================ 00:06:51.956 Zone Append Size Limit: 0 00:06:51.956 00:06:51.956 00:06:51.956 Active Namespaces 00:06:51.956 ================= 00:06:51.956 Namespace ID:1 00:06:51.956 Error Recovery Timeout: Unlimited 00:06:51.956 Command Set Identifier: NVM (00h) 00:06:51.956 Deallocate: Supported 00:06:51.956 Deallocated/Unwritten Error: Supported 00:06:51.956 Deallocated Read Value: All 0x00 00:06:51.956 Deallocate in Write Zeroes: Not Supported 00:06:51.956 Deallocated Guard Field: 0xFFFF 00:06:51.956 Flush: Supported 00:06:51.956 Reservation: Not Supported 00:06:51.956 Namespace Sharing Capabilities: Private 00:06:51.956 Size (in LBAs): 1048576 (4GiB) 00:06:51.956 Capacity (in LBAs): 1048576 (4GiB) 00:06:51.956 Utilization (in LBAs): 1048576 (4GiB) 00:06:51.956 Thin Provisioning: Not Supported 00:06:51.956 Per-NS Atomic Units: No 00:06:51.956 Maximum Single Source Range Length: 128 00:06:51.956 Maximum Copy Length: 128 00:06:51.956 Maximum Source Range Count: 128 00:06:51.956 NGUID/EUI64 Never Reused: No 00:06:51.956 Namespace Write Protected: No 00:06:51.956 Number of LBA Formats: 8 00:06:51.956 Current LBA Format: LBA Format #04 00:06:51.956 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.956 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.956 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.956 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:51.956 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.956 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.956 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.956 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.956 00:06:51.956 NVM Specific Namespace Data 00:06:51.956 =========================== 00:06:51.956 Logical Block Storage Tag Mask: 0 00:06:51.956 Protection Information Capabilities: 00:06:51.956 16b Guard Protection Information Storage Tag Support: No 00:06:51.956 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:51.956 Storage Tag Check Read Support: No 00:06:51.956 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.956 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.956 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Namespace ID:2 00:06:51.957 Error Recovery Timeout: Unlimited 00:06:51.957 Command Set Identifier: NVM (00h) 00:06:51.957 Deallocate: Supported 00:06:51.957 Deallocated/Unwritten Error: Supported 00:06:51.957 Deallocated Read Value: All 0x00 00:06:51.957 Deallocate in Write Zeroes: Not Supported 00:06:51.957 Deallocated Guard Field: 0xFFFF 00:06:51.957 Flush: Supported 00:06:51.957 Reservation: Not Supported 00:06:51.957 Namespace Sharing Capabilities: Private 00:06:51.957 Size (in LBAs): 1048576 (4GiB) 00:06:51.957 Capacity (in LBAs): 1048576 (4GiB) 00:06:51.957 Utilization (in LBAs): 1048576 (4GiB) 00:06:51.957 Thin Provisioning: Not Supported 00:06:51.957 Per-NS Atomic Units: No 00:06:51.957 Maximum Single Source Range Length: 128 00:06:51.957 Maximum Copy Length: 128 00:06:51.957 Maximum Source Range Count: 128 00:06:51.957 NGUID/EUI64 Never Reused: No 00:06:51.957 Namespace Write Protected: No 00:06:51.957 Number of LBA Formats: 8 00:06:51.957 Current LBA Format: LBA Format #04 00:06:51.957 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.957 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.957 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.957 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:51.957 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.957 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.957 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.957 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.957 00:06:51.957 NVM Specific Namespace Data 00:06:51.957 =========================== 00:06:51.957 Logical Block Storage Tag Mask: 0 00:06:51.957 Protection Information Capabilities: 00:06:51.957 16b Guard Protection Information Storage Tag Support: No 00:06:51.957 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:51.957 Storage Tag Check Read Support: No 00:06:51.957 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Namespace ID:3 00:06:51.957 Error Recovery Timeout: Unlimited 00:06:51.957 Command Set Identifier: NVM (00h) 00:06:51.957 Deallocate: Supported 00:06:51.957 Deallocated/Unwritten Error: Supported 00:06:51.957 Deallocated Read Value: All 0x00 00:06:51.957 Deallocate in Write Zeroes: Not Supported 00:06:51.957 Deallocated Guard Field: 0xFFFF 00:06:51.957 Flush: Supported 00:06:51.957 Reservation: Not Supported 00:06:51.957 Namespace Sharing Capabilities: Private 00:06:51.957 Size (in LBAs): 1048576 (4GiB) 00:06:51.957 Capacity (in LBAs): 1048576 (4GiB) 00:06:51.957 Utilization (in LBAs): 1048576 (4GiB) 00:06:51.957 Thin Provisioning: Not Supported 00:06:51.957 Per-NS Atomic Units: No 00:06:51.957 Maximum Single Source Range Length: 128 00:06:51.957 Maximum Copy Length: 128 00:06:51.957 Maximum Source Range Count: 128 00:06:51.957 NGUID/EUI64 Never Reused: No 00:06:51.957 Namespace Write Protected: No 00:06:51.957 Number of LBA Formats: 8 00:06:51.957 Current LBA Format: LBA Format #04 00:06:51.957 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:51.957 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:51.957 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:51.957 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:51.957 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:51.957 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:51.957 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:51.957 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:51.957 00:06:51.957 NVM Specific Namespace Data 00:06:51.957 =========================== 00:06:51.957 Logical Block Storage Tag Mask: 0 00:06:51.957 Protection Information Capabilities: 00:06:51.957 16b Guard Protection Information Storage Tag Support: No 00:06:51.957 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:51.957 Storage Tag Check Read Support: No 00:06:51.957 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:51.957 17:10:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:51.957 17:10:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:06:52.216 ===================================================== 00:06:52.216 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:52.216 ===================================================== 00:06:52.216 Controller Capabilities/Features 00:06:52.216 ================================ 00:06:52.216 Vendor ID: 1b36 00:06:52.216 Subsystem Vendor ID: 1af4 00:06:52.216 Serial Number: 12340 00:06:52.216 Model Number: QEMU NVMe Ctrl 00:06:52.216 Firmware Version: 8.0.0 00:06:52.216 Recommended Arb Burst: 6 00:06:52.216 IEEE OUI Identifier: 00 54 52 00:06:52.216 Multi-path I/O 00:06:52.216 May have multiple subsystem ports: No 00:06:52.216 May have multiple controllers: No 00:06:52.216 Associated with SR-IOV VF: No 00:06:52.216 Max Data Transfer Size: 524288 00:06:52.216 Max Number of Namespaces: 256 00:06:52.216 Max Number of I/O Queues: 64 00:06:52.216 NVMe Specification Version (VS): 1.4 00:06:52.216 NVMe Specification Version (Identify): 1.4 00:06:52.216 Maximum Queue Entries: 2048 00:06:52.216 Contiguous Queues Required: Yes 00:06:52.216 Arbitration Mechanisms Supported 00:06:52.216 Weighted Round Robin: Not Supported 00:06:52.216 Vendor Specific: Not Supported 00:06:52.216 Reset Timeout: 7500 ms 00:06:52.216 Doorbell Stride: 4 bytes 00:06:52.216 NVM Subsystem Reset: Not Supported 00:06:52.216 Command Sets Supported 00:06:52.216 NVM Command Set: Supported 00:06:52.216 Boot Partition: Not Supported 00:06:52.216 Memory Page Size Minimum: 4096 bytes 00:06:52.216 Memory Page Size Maximum: 65536 bytes 00:06:52.216 Persistent Memory Region: Not Supported 00:06:52.216 Optional Asynchronous Events Supported 00:06:52.216 Namespace Attribute Notices: Supported 00:06:52.216 Firmware Activation Notices: Not Supported 00:06:52.216 ANA Change Notices: Not Supported 00:06:52.216 PLE Aggregate Log Change Notices: Not Supported 00:06:52.216 LBA Status Info Alert Notices: Not Supported 00:06:52.216 EGE Aggregate Log Change Notices: Not Supported 00:06:52.216 Normal NVM Subsystem Shutdown event: Not Supported 00:06:52.216 Zone Descriptor Change Notices: Not Supported 00:06:52.216 Discovery Log Change Notices: Not Supported 00:06:52.216 Controller Attributes 00:06:52.216 128-bit Host Identifier: Not Supported 00:06:52.216 Non-Operational Permissive Mode: Not Supported 00:06:52.216 NVM Sets: Not Supported 00:06:52.216 Read Recovery Levels: Not Supported 00:06:52.216 Endurance Groups: Not Supported 00:06:52.216 Predictable Latency Mode: Not Supported 00:06:52.216 Traffic Based Keep ALive: Not Supported 00:06:52.216 Namespace Granularity: Not Supported 00:06:52.216 SQ Associations: Not Supported 00:06:52.216 UUID List: Not Supported 00:06:52.216 Multi-Domain Subsystem: Not Supported 00:06:52.216 Fixed Capacity Management: Not Supported 00:06:52.216 Variable Capacity Management: Not Supported 00:06:52.216 Delete Endurance Group: Not Supported 00:06:52.216 Delete NVM Set: Not Supported 00:06:52.216 Extended LBA Formats Supported: Supported 00:06:52.216 Flexible Data Placement Supported: Not Supported 00:06:52.216 00:06:52.216 Controller Memory Buffer Support 00:06:52.216 ================================ 00:06:52.216 Supported: No 00:06:52.216 00:06:52.216 Persistent Memory Region Support 00:06:52.216 ================================ 00:06:52.216 Supported: No 00:06:52.216 00:06:52.216 Admin Command Set Attributes 00:06:52.216 ============================ 00:06:52.216 Security Send/Receive: Not Supported 00:06:52.216 
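For context, the two shell-trace lines above come from nvme.sh iterating over the attached PCIe addresses and running spdk_nvme_identify once per controller. A minimal stand-alone sketch of that loop, assuming the same repo path and the BDF list exercised in this run:

  # Sketch only: binary path and BDF list are taken from this particular run.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done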
Format NVM: Supported 00:06:52.216 Firmware Activate/Download: Not Supported 00:06:52.216 Namespace Management: Supported 00:06:52.216 Device Self-Test: Not Supported 00:06:52.216 Directives: Supported 00:06:52.216 NVMe-MI: Not Supported 00:06:52.216 Virtualization Management: Not Supported 00:06:52.216 Doorbell Buffer Config: Supported 00:06:52.216 Get LBA Status Capability: Not Supported 00:06:52.216 Command & Feature Lockdown Capability: Not Supported 00:06:52.216 Abort Command Limit: 4 00:06:52.216 Async Event Request Limit: 4 00:06:52.216 Number of Firmware Slots: N/A 00:06:52.216 Firmware Slot 1 Read-Only: N/A 00:06:52.216 Firmware Activation Without Reset: N/A 00:06:52.216 Multiple Update Detection Support: N/A 00:06:52.216 Firmware Update Granularity: No Information Provided 00:06:52.216 Per-Namespace SMART Log: Yes 00:06:52.216 Asymmetric Namespace Access Log Page: Not Supported 00:06:52.216 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:06:52.216 Command Effects Log Page: Supported 00:06:52.216 Get Log Page Extended Data: Supported 00:06:52.216 Telemetry Log Pages: Not Supported 00:06:52.216 Persistent Event Log Pages: Not Supported 00:06:52.216 Supported Log Pages Log Page: May Support 00:06:52.216 Commands Supported & Effects Log Page: Not Supported 00:06:52.216 Feature Identifiers & Effects Log Page:May Support 00:06:52.216 NVMe-MI Commands & Effects Log Page: May Support 00:06:52.216 Data Area 4 for Telemetry Log: Not Supported 00:06:52.216 Error Log Page Entries Supported: 1 00:06:52.216 Keep Alive: Not Supported 00:06:52.216 00:06:52.216 NVM Command Set Attributes 00:06:52.216 ========================== 00:06:52.216 Submission Queue Entry Size 00:06:52.217 Max: 64 00:06:52.217 Min: 64 00:06:52.217 Completion Queue Entry Size 00:06:52.217 Max: 16 00:06:52.217 Min: 16 00:06:52.217 Number of Namespaces: 256 00:06:52.217 Compare Command: Supported 00:06:52.217 Write Uncorrectable Command: Not Supported 00:06:52.217 Dataset Management Command: Supported 00:06:52.217 Write Zeroes Command: Supported 00:06:52.217 Set Features Save Field: Supported 00:06:52.217 Reservations: Not Supported 00:06:52.217 Timestamp: Supported 00:06:52.217 Copy: Supported 00:06:52.217 Volatile Write Cache: Present 00:06:52.217 Atomic Write Unit (Normal): 1 00:06:52.217 Atomic Write Unit (PFail): 1 00:06:52.217 Atomic Compare & Write Unit: 1 00:06:52.217 Fused Compare & Write: Not Supported 00:06:52.217 Scatter-Gather List 00:06:52.217 SGL Command Set: Supported 00:06:52.217 SGL Keyed: Not Supported 00:06:52.217 SGL Bit Bucket Descriptor: Not Supported 00:06:52.217 SGL Metadata Pointer: Not Supported 00:06:52.217 Oversized SGL: Not Supported 00:06:52.217 SGL Metadata Address: Not Supported 00:06:52.217 SGL Offset: Not Supported 00:06:52.217 Transport SGL Data Block: Not Supported 00:06:52.217 Replay Protected Memory Block: Not Supported 00:06:52.217 00:06:52.217 Firmware Slot Information 00:06:52.217 ========================= 00:06:52.217 Active slot: 1 00:06:52.217 Slot 1 Firmware Revision: 1.0 00:06:52.217 00:06:52.217 00:06:52.217 Commands Supported and Effects 00:06:52.217 ============================== 00:06:52.217 Admin Commands 00:06:52.217 -------------- 00:06:52.217 Delete I/O Submission Queue (00h): Supported 00:06:52.217 Create I/O Submission Queue (01h): Supported 00:06:52.217 Get Log Page (02h): Supported 00:06:52.217 Delete I/O Completion Queue (04h): Supported 00:06:52.217 Create I/O Completion Queue (05h): Supported 00:06:52.217 Identify (06h): Supported 00:06:52.217 Abort (08h): Supported 
00:06:52.217 Set Features (09h): Supported 00:06:52.217 Get Features (0Ah): Supported 00:06:52.217 Asynchronous Event Request (0Ch): Supported 00:06:52.217 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:52.217 Directive Send (19h): Supported 00:06:52.217 Directive Receive (1Ah): Supported 00:06:52.217 Virtualization Management (1Ch): Supported 00:06:52.217 Doorbell Buffer Config (7Ch): Supported 00:06:52.217 Format NVM (80h): Supported LBA-Change 00:06:52.217 I/O Commands 00:06:52.217 ------------ 00:06:52.217 Flush (00h): Supported LBA-Change 00:06:52.217 Write (01h): Supported LBA-Change 00:06:52.217 Read (02h): Supported 00:06:52.217 Compare (05h): Supported 00:06:52.217 Write Zeroes (08h): Supported LBA-Change 00:06:52.217 Dataset Management (09h): Supported LBA-Change 00:06:52.217 Unknown (0Ch): Supported 00:06:52.217 Unknown (12h): Supported 00:06:52.217 Copy (19h): Supported LBA-Change 00:06:52.217 Unknown (1Dh): Supported LBA-Change 00:06:52.217 00:06:52.217 Error Log 00:06:52.217 ========= 00:06:52.217 00:06:52.217 Arbitration 00:06:52.217 =========== 00:06:52.217 Arbitration Burst: no limit 00:06:52.217 00:06:52.217 Power Management 00:06:52.217 ================ 00:06:52.217 Number of Power States: 1 00:06:52.217 Current Power State: Power State #0 00:06:52.217 Power State #0: 00:06:52.217 Max Power: 25.00 W 00:06:52.217 Non-Operational State: Operational 00:06:52.217 Entry Latency: 16 microseconds 00:06:52.217 Exit Latency: 4 microseconds 00:06:52.217 Relative Read Throughput: 0 00:06:52.217 Relative Read Latency: 0 00:06:52.217 Relative Write Throughput: 0 00:06:52.217 Relative Write Latency: 0 00:06:52.217 Idle Power: Not Reported 00:06:52.217 Active Power: Not Reported 00:06:52.217 Non-Operational Permissive Mode: Not Supported 00:06:52.217 00:06:52.217 Health Information 00:06:52.217 ================== 00:06:52.217 Critical Warnings: 00:06:52.217 Available Spare Space: OK 00:06:52.217 Temperature: OK 00:06:52.217 Device Reliability: OK 00:06:52.217 Read Only: No 00:06:52.217 Volatile Memory Backup: OK 00:06:52.217 Current Temperature: 323 Kelvin (50 Celsius) 00:06:52.217 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:52.217 Available Spare: 0% 00:06:52.217 Available Spare Threshold: 0% 00:06:52.217 Life Percentage Used: 0% 00:06:52.217 Data Units Read: 732 00:06:52.217 Data Units Written: 660 00:06:52.217 Host Read Commands: 38619 00:06:52.217 Host Write Commands: 38405 00:06:52.217 Controller Busy Time: 0 minutes 00:06:52.217 Power Cycles: 0 00:06:52.217 Power On Hours: 0 hours 00:06:52.217 Unsafe Shutdowns: 0 00:06:52.217 Unrecoverable Media Errors: 0 00:06:52.217 Lifetime Error Log Entries: 0 00:06:52.217 Warning Temperature Time: 0 minutes 00:06:52.217 Critical Temperature Time: 0 minutes 00:06:52.217 00:06:52.217 Number of Queues 00:06:52.217 ================ 00:06:52.217 Number of I/O Submission Queues: 64 00:06:52.217 Number of I/O Completion Queues: 64 00:06:52.217 00:06:52.217 ZNS Specific Controller Data 00:06:52.217 ============================ 00:06:52.217 Zone Append Size Limit: 0 00:06:52.217 00:06:52.217 00:06:52.217 Active Namespaces 00:06:52.217 ================= 00:06:52.217 Namespace ID:1 00:06:52.217 Error Recovery Timeout: Unlimited 00:06:52.217 Command Set Identifier: NVM (00h) 00:06:52.217 Deallocate: Supported 00:06:52.217 Deallocated/Unwritten Error: Supported 00:06:52.217 Deallocated Read Value: All 0x00 00:06:52.217 Deallocate in Write Zeroes: Not Supported 00:06:52.217 Deallocated Guard Field: 0xFFFF 00:06:52.217 Flush: 
Supported 00:06:52.217 Reservation: Not Supported 00:06:52.217 Metadata Transferred as: Separate Metadata Buffer 00:06:52.217 Namespace Sharing Capabilities: Private 00:06:52.217 Size (in LBAs): 1548666 (5GiB) 00:06:52.217 Capacity (in LBAs): 1548666 (5GiB) 00:06:52.217 Utilization (in LBAs): 1548666 (5GiB) 00:06:52.217 Thin Provisioning: Not Supported 00:06:52.217 Per-NS Atomic Units: No 00:06:52.217 Maximum Single Source Range Length: 128 00:06:52.217 Maximum Copy Length: 128 00:06:52.217 Maximum Source Range Count: 128 00:06:52.217 NGUID/EUI64 Never Reused: No 00:06:52.217 Namespace Write Protected: No 00:06:52.217 Number of LBA Formats: 8 00:06:52.217 Current LBA Format: LBA Format #07 00:06:52.217 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:52.217 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:52.217 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:52.217 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:52.217 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:52.217 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:52.217 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:52.217 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:52.217 00:06:52.217 NVM Specific Namespace Data 00:06:52.217 =========================== 00:06:52.217 Logical Block Storage Tag Mask: 0 00:06:52.217 Protection Information Capabilities: 00:06:52.217 16b Guard Protection Information Storage Tag Support: No 00:06:52.217 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:52.217 Storage Tag Check Read Support: No 00:06:52.217 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.217 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.217 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.217 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.217 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.217 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.217 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.217 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.217 17:10:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:52.217 17:10:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:06:52.476 ===================================================== 00:06:52.476 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:52.476 ===================================================== 00:06:52.476 Controller Capabilities/Features 00:06:52.476 ================================ 00:06:52.476 Vendor ID: 1b36 00:06:52.476 Subsystem Vendor ID: 1af4 00:06:52.476 Serial Number: 12341 00:06:52.476 Model Number: QEMU NVMe Ctrl 00:06:52.476 Firmware Version: 8.0.0 00:06:52.476 Recommended Arb Burst: 6 00:06:52.476 IEEE OUI Identifier: 00 54 52 00:06:52.476 Multi-path I/O 00:06:52.476 May have multiple subsystem ports: No 00:06:52.476 May have multiple controllers: No 00:06:52.476 Associated with SR-IOV VF: No 00:06:52.476 Max Data Transfer Size: 524288 00:06:52.476 Max Number of Namespaces: 256 00:06:52.476 Max Number of I/O Queues: 64 00:06:52.476 NVMe 
Specification Version (VS): 1.4 00:06:52.476 NVMe Specification Version (Identify): 1.4 00:06:52.476 Maximum Queue Entries: 2048 00:06:52.476 Contiguous Queues Required: Yes 00:06:52.476 Arbitration Mechanisms Supported 00:06:52.476 Weighted Round Robin: Not Supported 00:06:52.476 Vendor Specific: Not Supported 00:06:52.476 Reset Timeout: 7500 ms 00:06:52.476 Doorbell Stride: 4 bytes 00:06:52.476 NVM Subsystem Reset: Not Supported 00:06:52.476 Command Sets Supported 00:06:52.476 NVM Command Set: Supported 00:06:52.476 Boot Partition: Not Supported 00:06:52.476 Memory Page Size Minimum: 4096 bytes 00:06:52.476 Memory Page Size Maximum: 65536 bytes 00:06:52.476 Persistent Memory Region: Not Supported 00:06:52.476 Optional Asynchronous Events Supported 00:06:52.476 Namespace Attribute Notices: Supported 00:06:52.476 Firmware Activation Notices: Not Supported 00:06:52.476 ANA Change Notices: Not Supported 00:06:52.476 PLE Aggregate Log Change Notices: Not Supported 00:06:52.476 LBA Status Info Alert Notices: Not Supported 00:06:52.476 EGE Aggregate Log Change Notices: Not Supported 00:06:52.476 Normal NVM Subsystem Shutdown event: Not Supported 00:06:52.476 Zone Descriptor Change Notices: Not Supported 00:06:52.476 Discovery Log Change Notices: Not Supported 00:06:52.476 Controller Attributes 00:06:52.476 128-bit Host Identifier: Not Supported 00:06:52.476 Non-Operational Permissive Mode: Not Supported 00:06:52.476 NVM Sets: Not Supported 00:06:52.476 Read Recovery Levels: Not Supported 00:06:52.476 Endurance Groups: Not Supported 00:06:52.476 Predictable Latency Mode: Not Supported 00:06:52.476 Traffic Based Keep ALive: Not Supported 00:06:52.476 Namespace Granularity: Not Supported 00:06:52.476 SQ Associations: Not Supported 00:06:52.476 UUID List: Not Supported 00:06:52.476 Multi-Domain Subsystem: Not Supported 00:06:52.476 Fixed Capacity Management: Not Supported 00:06:52.476 Variable Capacity Management: Not Supported 00:06:52.476 Delete Endurance Group: Not Supported 00:06:52.476 Delete NVM Set: Not Supported 00:06:52.476 Extended LBA Formats Supported: Supported 00:06:52.476 Flexible Data Placement Supported: Not Supported 00:06:52.476 00:06:52.476 Controller Memory Buffer Support 00:06:52.476 ================================ 00:06:52.476 Supported: No 00:06:52.476 00:06:52.476 Persistent Memory Region Support 00:06:52.476 ================================ 00:06:52.476 Supported: No 00:06:52.476 00:06:52.476 Admin Command Set Attributes 00:06:52.476 ============================ 00:06:52.476 Security Send/Receive: Not Supported 00:06:52.476 Format NVM: Supported 00:06:52.476 Firmware Activate/Download: Not Supported 00:06:52.476 Namespace Management: Supported 00:06:52.476 Device Self-Test: Not Supported 00:06:52.476 Directives: Supported 00:06:52.476 NVMe-MI: Not Supported 00:06:52.476 Virtualization Management: Not Supported 00:06:52.476 Doorbell Buffer Config: Supported 00:06:52.476 Get LBA Status Capability: Not Supported 00:06:52.476 Command & Feature Lockdown Capability: Not Supported 00:06:52.476 Abort Command Limit: 4 00:06:52.476 Async Event Request Limit: 4 00:06:52.476 Number of Firmware Slots: N/A 00:06:52.476 Firmware Slot 1 Read-Only: N/A 00:06:52.476 Firmware Activation Without Reset: N/A 00:06:52.477 Multiple Update Detection Support: N/A 00:06:52.477 Firmware Update Granularity: No Information Provided 00:06:52.477 Per-Namespace SMART Log: Yes 00:06:52.477 Asymmetric Namespace Access Log Page: Not Supported 00:06:52.477 Subsystem NQN: nqn.2019-08.org.qemu:12341 
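Each controller in these dumps reports itself as [1b36:0010], QEMU's emulated NVMe device, with serial numbers (12340, 12341, 12342, ...) matching the qemu subsystem NQNs. If one wanted to confirm which PCI functions those are on the test VM, a hedged sketch (assuming lspci is available in the guest):

  # List the emulated NVMe functions referenced above: vendor 1b36, device 0010.
  lspci -Dnn -d 1b36:0010   # -D prints the PCI domain, -nn adds numeric [vendor:device] IDs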
00:06:52.477 Command Effects Log Page: Supported 00:06:52.477 Get Log Page Extended Data: Supported 00:06:52.477 Telemetry Log Pages: Not Supported 00:06:52.477 Persistent Event Log Pages: Not Supported 00:06:52.477 Supported Log Pages Log Page: May Support 00:06:52.477 Commands Supported & Effects Log Page: Not Supported 00:06:52.477 Feature Identifiers & Effects Log Page:May Support 00:06:52.477 NVMe-MI Commands & Effects Log Page: May Support 00:06:52.477 Data Area 4 for Telemetry Log: Not Supported 00:06:52.477 Error Log Page Entries Supported: 1 00:06:52.477 Keep Alive: Not Supported 00:06:52.477 00:06:52.477 NVM Command Set Attributes 00:06:52.477 ========================== 00:06:52.477 Submission Queue Entry Size 00:06:52.477 Max: 64 00:06:52.477 Min: 64 00:06:52.477 Completion Queue Entry Size 00:06:52.477 Max: 16 00:06:52.477 Min: 16 00:06:52.477 Number of Namespaces: 256 00:06:52.477 Compare Command: Supported 00:06:52.477 Write Uncorrectable Command: Not Supported 00:06:52.477 Dataset Management Command: Supported 00:06:52.477 Write Zeroes Command: Supported 00:06:52.477 Set Features Save Field: Supported 00:06:52.477 Reservations: Not Supported 00:06:52.477 Timestamp: Supported 00:06:52.477 Copy: Supported 00:06:52.477 Volatile Write Cache: Present 00:06:52.477 Atomic Write Unit (Normal): 1 00:06:52.477 Atomic Write Unit (PFail): 1 00:06:52.477 Atomic Compare & Write Unit: 1 00:06:52.477 Fused Compare & Write: Not Supported 00:06:52.477 Scatter-Gather List 00:06:52.477 SGL Command Set: Supported 00:06:52.477 SGL Keyed: Not Supported 00:06:52.477 SGL Bit Bucket Descriptor: Not Supported 00:06:52.477 SGL Metadata Pointer: Not Supported 00:06:52.477 Oversized SGL: Not Supported 00:06:52.477 SGL Metadata Address: Not Supported 00:06:52.477 SGL Offset: Not Supported 00:06:52.477 Transport SGL Data Block: Not Supported 00:06:52.477 Replay Protected Memory Block: Not Supported 00:06:52.477 00:06:52.477 Firmware Slot Information 00:06:52.477 ========================= 00:06:52.477 Active slot: 1 00:06:52.477 Slot 1 Firmware Revision: 1.0 00:06:52.477 00:06:52.477 00:06:52.477 Commands Supported and Effects 00:06:52.477 ============================== 00:06:52.477 Admin Commands 00:06:52.477 -------------- 00:06:52.477 Delete I/O Submission Queue (00h): Supported 00:06:52.477 Create I/O Submission Queue (01h): Supported 00:06:52.477 Get Log Page (02h): Supported 00:06:52.477 Delete I/O Completion Queue (04h): Supported 00:06:52.477 Create I/O Completion Queue (05h): Supported 00:06:52.477 Identify (06h): Supported 00:06:52.477 Abort (08h): Supported 00:06:52.477 Set Features (09h): Supported 00:06:52.477 Get Features (0Ah): Supported 00:06:52.477 Asynchronous Event Request (0Ch): Supported 00:06:52.477 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:52.477 Directive Send (19h): Supported 00:06:52.477 Directive Receive (1Ah): Supported 00:06:52.477 Virtualization Management (1Ch): Supported 00:06:52.477 Doorbell Buffer Config (7Ch): Supported 00:06:52.477 Format NVM (80h): Supported LBA-Change 00:06:52.477 I/O Commands 00:06:52.477 ------------ 00:06:52.477 Flush (00h): Supported LBA-Change 00:06:52.477 Write (01h): Supported LBA-Change 00:06:52.477 Read (02h): Supported 00:06:52.477 Compare (05h): Supported 00:06:52.477 Write Zeroes (08h): Supported LBA-Change 00:06:52.477 Dataset Management (09h): Supported LBA-Change 00:06:52.477 Unknown (0Ch): Supported 00:06:52.477 Unknown (12h): Supported 00:06:52.477 Copy (19h): Supported LBA-Change 00:06:52.477 Unknown (1Dh): 
Supported LBA-Change 00:06:52.477 00:06:52.477 Error Log 00:06:52.477 ========= 00:06:52.477 00:06:52.477 Arbitration 00:06:52.477 =========== 00:06:52.477 Arbitration Burst: no limit 00:06:52.477 00:06:52.477 Power Management 00:06:52.477 ================ 00:06:52.477 Number of Power States: 1 00:06:52.477 Current Power State: Power State #0 00:06:52.477 Power State #0: 00:06:52.477 Max Power: 25.00 W 00:06:52.477 Non-Operational State: Operational 00:06:52.477 Entry Latency: 16 microseconds 00:06:52.477 Exit Latency: 4 microseconds 00:06:52.477 Relative Read Throughput: 0 00:06:52.477 Relative Read Latency: 0 00:06:52.477 Relative Write Throughput: 0 00:06:52.477 Relative Write Latency: 0 00:06:52.477 Idle Power: Not Reported 00:06:52.477 Active Power: Not Reported 00:06:52.477 Non-Operational Permissive Mode: Not Supported 00:06:52.477 00:06:52.477 Health Information 00:06:52.477 ================== 00:06:52.477 Critical Warnings: 00:06:52.477 Available Spare Space: OK 00:06:52.477 Temperature: OK 00:06:52.477 Device Reliability: OK 00:06:52.477 Read Only: No 00:06:52.477 Volatile Memory Backup: OK 00:06:52.477 Current Temperature: 323 Kelvin (50 Celsius) 00:06:52.477 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:52.477 Available Spare: 0% 00:06:52.477 Available Spare Threshold: 0% 00:06:52.477 Life Percentage Used: 0% 00:06:52.477 Data Units Read: 1141 00:06:52.477 Data Units Written: 1014 00:06:52.477 Host Read Commands: 59081 00:06:52.477 Host Write Commands: 57968 00:06:52.477 Controller Busy Time: 0 minutes 00:06:52.477 Power Cycles: 0 00:06:52.477 Power On Hours: 0 hours 00:06:52.477 Unsafe Shutdowns: 0 00:06:52.477 Unrecoverable Media Errors: 0 00:06:52.477 Lifetime Error Log Entries: 0 00:06:52.477 Warning Temperature Time: 0 minutes 00:06:52.477 Critical Temperature Time: 0 minutes 00:06:52.477 00:06:52.477 Number of Queues 00:06:52.477 ================ 00:06:52.477 Number of I/O Submission Queues: 64 00:06:52.477 Number of I/O Completion Queues: 64 00:06:52.477 00:06:52.477 ZNS Specific Controller Data 00:06:52.477 ============================ 00:06:52.477 Zone Append Size Limit: 0 00:06:52.477 00:06:52.477 00:06:52.477 Active Namespaces 00:06:52.477 ================= 00:06:52.477 Namespace ID:1 00:06:52.477 Error Recovery Timeout: Unlimited 00:06:52.477 Command Set Identifier: NVM (00h) 00:06:52.477 Deallocate: Supported 00:06:52.477 Deallocated/Unwritten Error: Supported 00:06:52.477 Deallocated Read Value: All 0x00 00:06:52.477 Deallocate in Write Zeroes: Not Supported 00:06:52.477 Deallocated Guard Field: 0xFFFF 00:06:52.477 Flush: Supported 00:06:52.477 Reservation: Not Supported 00:06:52.477 Namespace Sharing Capabilities: Private 00:06:52.477 Size (in LBAs): 1310720 (5GiB) 00:06:52.477 Capacity (in LBAs): 1310720 (5GiB) 00:06:52.477 Utilization (in LBAs): 1310720 (5GiB) 00:06:52.477 Thin Provisioning: Not Supported 00:06:52.477 Per-NS Atomic Units: No 00:06:52.477 Maximum Single Source Range Length: 128 00:06:52.477 Maximum Copy Length: 128 00:06:52.477 Maximum Source Range Count: 128 00:06:52.477 NGUID/EUI64 Never Reused: No 00:06:52.477 Namespace Write Protected: No 00:06:52.477 Number of LBA Formats: 8 00:06:52.477 Current LBA Format: LBA Format #04 00:06:52.477 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:52.477 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:52.477 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:52.477 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:52.477 LBA Format #04: Data Size: 4096 Metadata Size: 0 
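The namespace reported just above (controller 12341) shows Size (in LBAs): 1310720 with current LBA format #04, whose data size is 4096 bytes, so the "(5GiB)" annotation checks out exactly:

  # Capacity check for the 12341 namespace: LBA count x data size of the current format (#04).
  lbas=1310720
  block=4096
  echo "$(( lbas * block )) bytes = $(( lbas * block / 1024 / 1024 / 1024 )) GiB"   # 5368709120 bytes = 5 GiB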
00:06:52.477 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:52.477 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:52.477 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:52.477 00:06:52.477 NVM Specific Namespace Data 00:06:52.477 =========================== 00:06:52.477 Logical Block Storage Tag Mask: 0 00:06:52.477 Protection Information Capabilities: 00:06:52.477 16b Guard Protection Information Storage Tag Support: No 00:06:52.477 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:52.477 Storage Tag Check Read Support: No 00:06:52.477 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.477 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.477 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.477 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.477 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.477 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.477 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.477 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.477 17:10:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:52.477 17:10:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:06:52.738 ===================================================== 00:06:52.738 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:52.738 ===================================================== 00:06:52.738 Controller Capabilities/Features 00:06:52.738 ================================ 00:06:52.738 Vendor ID: 1b36 00:06:52.738 Subsystem Vendor ID: 1af4 00:06:52.738 Serial Number: 12342 00:06:52.738 Model Number: QEMU NVMe Ctrl 00:06:52.738 Firmware Version: 8.0.0 00:06:52.738 Recommended Arb Burst: 6 00:06:52.738 IEEE OUI Identifier: 00 54 52 00:06:52.738 Multi-path I/O 00:06:52.738 May have multiple subsystem ports: No 00:06:52.738 May have multiple controllers: No 00:06:52.738 Associated with SR-IOV VF: No 00:06:52.738 Max Data Transfer Size: 524288 00:06:52.738 Max Number of Namespaces: 256 00:06:52.738 Max Number of I/O Queues: 64 00:06:52.738 NVMe Specification Version (VS): 1.4 00:06:52.738 NVMe Specification Version (Identify): 1.4 00:06:52.738 Maximum Queue Entries: 2048 00:06:52.738 Contiguous Queues Required: Yes 00:06:52.738 Arbitration Mechanisms Supported 00:06:52.738 Weighted Round Robin: Not Supported 00:06:52.738 Vendor Specific: Not Supported 00:06:52.738 Reset Timeout: 7500 ms 00:06:52.738 Doorbell Stride: 4 bytes 00:06:52.738 NVM Subsystem Reset: Not Supported 00:06:52.738 Command Sets Supported 00:06:52.738 NVM Command Set: Supported 00:06:52.738 Boot Partition: Not Supported 00:06:52.738 Memory Page Size Minimum: 4096 bytes 00:06:52.738 Memory Page Size Maximum: 65536 bytes 00:06:52.738 Persistent Memory Region: Not Supported 00:06:52.738 Optional Asynchronous Events Supported 00:06:52.738 Namespace Attribute Notices: Supported 00:06:52.738 Firmware Activation Notices: Not Supported 00:06:52.738 ANA Change Notices: Not Supported 00:06:52.738 PLE Aggregate Log Change Notices: Not Supported 00:06:52.738 LBA Status Info Alert Notices: 
Not Supported 00:06:52.738 EGE Aggregate Log Change Notices: Not Supported 00:06:52.738 Normal NVM Subsystem Shutdown event: Not Supported 00:06:52.738 Zone Descriptor Change Notices: Not Supported 00:06:52.738 Discovery Log Change Notices: Not Supported 00:06:52.738 Controller Attributes 00:06:52.738 128-bit Host Identifier: Not Supported 00:06:52.738 Non-Operational Permissive Mode: Not Supported 00:06:52.738 NVM Sets: Not Supported 00:06:52.738 Read Recovery Levels: Not Supported 00:06:52.738 Endurance Groups: Not Supported 00:06:52.738 Predictable Latency Mode: Not Supported 00:06:52.738 Traffic Based Keep ALive: Not Supported 00:06:52.738 Namespace Granularity: Not Supported 00:06:52.738 SQ Associations: Not Supported 00:06:52.738 UUID List: Not Supported 00:06:52.738 Multi-Domain Subsystem: Not Supported 00:06:52.738 Fixed Capacity Management: Not Supported 00:06:52.738 Variable Capacity Management: Not Supported 00:06:52.738 Delete Endurance Group: Not Supported 00:06:52.738 Delete NVM Set: Not Supported 00:06:52.738 Extended LBA Formats Supported: Supported 00:06:52.738 Flexible Data Placement Supported: Not Supported 00:06:52.738 00:06:52.738 Controller Memory Buffer Support 00:06:52.738 ================================ 00:06:52.738 Supported: No 00:06:52.738 00:06:52.738 Persistent Memory Region Support 00:06:52.738 ================================ 00:06:52.738 Supported: No 00:06:52.738 00:06:52.738 Admin Command Set Attributes 00:06:52.738 ============================ 00:06:52.738 Security Send/Receive: Not Supported 00:06:52.738 Format NVM: Supported 00:06:52.738 Firmware Activate/Download: Not Supported 00:06:52.738 Namespace Management: Supported 00:06:52.738 Device Self-Test: Not Supported 00:06:52.738 Directives: Supported 00:06:52.738 NVMe-MI: Not Supported 00:06:52.738 Virtualization Management: Not Supported 00:06:52.738 Doorbell Buffer Config: Supported 00:06:52.738 Get LBA Status Capability: Not Supported 00:06:52.738 Command & Feature Lockdown Capability: Not Supported 00:06:52.738 Abort Command Limit: 4 00:06:52.738 Async Event Request Limit: 4 00:06:52.738 Number of Firmware Slots: N/A 00:06:52.738 Firmware Slot 1 Read-Only: N/A 00:06:52.738 Firmware Activation Without Reset: N/A 00:06:52.738 Multiple Update Detection Support: N/A 00:06:52.738 Firmware Update Granularity: No Information Provided 00:06:52.738 Per-Namespace SMART Log: Yes 00:06:52.738 Asymmetric Namespace Access Log Page: Not Supported 00:06:52.738 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:06:52.739 Command Effects Log Page: Supported 00:06:52.739 Get Log Page Extended Data: Supported 00:06:52.739 Telemetry Log Pages: Not Supported 00:06:52.739 Persistent Event Log Pages: Not Supported 00:06:52.739 Supported Log Pages Log Page: May Support 00:06:52.739 Commands Supported & Effects Log Page: Not Supported 00:06:52.739 Feature Identifiers & Effects Log Page:May Support 00:06:52.739 NVMe-MI Commands & Effects Log Page: May Support 00:06:52.739 Data Area 4 for Telemetry Log: Not Supported 00:06:52.739 Error Log Page Entries Supported: 1 00:06:52.739 Keep Alive: Not Supported 00:06:52.739 00:06:52.739 NVM Command Set Attributes 00:06:52.739 ========================== 00:06:52.739 Submission Queue Entry Size 00:06:52.739 Max: 64 00:06:52.739 Min: 64 00:06:52.739 Completion Queue Entry Size 00:06:52.739 Max: 16 00:06:52.739 Min: 16 00:06:52.739 Number of Namespaces: 256 00:06:52.739 Compare Command: Supported 00:06:52.739 Write Uncorrectable Command: Not Supported 00:06:52.739 Dataset Management Command: 
Supported 00:06:52.739 Write Zeroes Command: Supported 00:06:52.739 Set Features Save Field: Supported 00:06:52.739 Reservations: Not Supported 00:06:52.739 Timestamp: Supported 00:06:52.739 Copy: Supported 00:06:52.739 Volatile Write Cache: Present 00:06:52.739 Atomic Write Unit (Normal): 1 00:06:52.739 Atomic Write Unit (PFail): 1 00:06:52.739 Atomic Compare & Write Unit: 1 00:06:52.739 Fused Compare & Write: Not Supported 00:06:52.739 Scatter-Gather List 00:06:52.739 SGL Command Set: Supported 00:06:52.739 SGL Keyed: Not Supported 00:06:52.739 SGL Bit Bucket Descriptor: Not Supported 00:06:52.739 SGL Metadata Pointer: Not Supported 00:06:52.739 Oversized SGL: Not Supported 00:06:52.739 SGL Metadata Address: Not Supported 00:06:52.739 SGL Offset: Not Supported 00:06:52.739 Transport SGL Data Block: Not Supported 00:06:52.739 Replay Protected Memory Block: Not Supported 00:06:52.739 00:06:52.739 Firmware Slot Information 00:06:52.739 ========================= 00:06:52.739 Active slot: 1 00:06:52.739 Slot 1 Firmware Revision: 1.0 00:06:52.739 00:06:52.739 00:06:52.739 Commands Supported and Effects 00:06:52.739 ============================== 00:06:52.739 Admin Commands 00:06:52.739 -------------- 00:06:52.739 Delete I/O Submission Queue (00h): Supported 00:06:52.739 Create I/O Submission Queue (01h): Supported 00:06:52.739 Get Log Page (02h): Supported 00:06:52.739 Delete I/O Completion Queue (04h): Supported 00:06:52.739 Create I/O Completion Queue (05h): Supported 00:06:52.739 Identify (06h): Supported 00:06:52.739 Abort (08h): Supported 00:06:52.739 Set Features (09h): Supported 00:06:52.739 Get Features (0Ah): Supported 00:06:52.739 Asynchronous Event Request (0Ch): Supported 00:06:52.739 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:52.739 Directive Send (19h): Supported 00:06:52.739 Directive Receive (1Ah): Supported 00:06:52.739 Virtualization Management (1Ch): Supported 00:06:52.739 Doorbell Buffer Config (7Ch): Supported 00:06:52.739 Format NVM (80h): Supported LBA-Change 00:06:52.739 I/O Commands 00:06:52.739 ------------ 00:06:52.739 Flush (00h): Supported LBA-Change 00:06:52.739 Write (01h): Supported LBA-Change 00:06:52.739 Read (02h): Supported 00:06:52.739 Compare (05h): Supported 00:06:52.739 Write Zeroes (08h): Supported LBA-Change 00:06:52.739 Dataset Management (09h): Supported LBA-Change 00:06:52.739 Unknown (0Ch): Supported 00:06:52.739 Unknown (12h): Supported 00:06:52.739 Copy (19h): Supported LBA-Change 00:06:52.739 Unknown (1Dh): Supported LBA-Change 00:06:52.739 00:06:52.739 Error Log 00:06:52.739 ========= 00:06:52.739 00:06:52.739 Arbitration 00:06:52.739 =========== 00:06:52.739 Arbitration Burst: no limit 00:06:52.739 00:06:52.739 Power Management 00:06:52.739 ================ 00:06:52.739 Number of Power States: 1 00:06:52.739 Current Power State: Power State #0 00:06:52.739 Power State #0: 00:06:52.739 Max Power: 25.00 W 00:06:52.739 Non-Operational State: Operational 00:06:52.739 Entry Latency: 16 microseconds 00:06:52.739 Exit Latency: 4 microseconds 00:06:52.739 Relative Read Throughput: 0 00:06:52.739 Relative Read Latency: 0 00:06:52.739 Relative Write Throughput: 0 00:06:52.739 Relative Write Latency: 0 00:06:52.739 Idle Power: Not Reported 00:06:52.739 Active Power: Not Reported 00:06:52.739 Non-Operational Permissive Mode: Not Supported 00:06:52.739 00:06:52.739 Health Information 00:06:52.739 ================== 00:06:52.739 Critical Warnings: 00:06:52.739 Available Spare Space: OK 00:06:52.739 Temperature: OK 00:06:52.739 Device 
Reliability: OK 00:06:52.739 Read Only: No 00:06:52.739 Volatile Memory Backup: OK 00:06:52.739 Current Temperature: 323 Kelvin (50 Celsius) 00:06:52.739 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:52.739 Available Spare: 0% 00:06:52.739 Available Spare Threshold: 0% 00:06:52.739 Life Percentage Used: 0% 00:06:52.739 Data Units Read: 2368 00:06:52.739 Data Units Written: 2155 00:06:52.739 Host Read Commands: 117857 00:06:52.739 Host Write Commands: 116126 00:06:52.739 Controller Busy Time: 0 minutes 00:06:52.739 Power Cycles: 0 00:06:52.739 Power On Hours: 0 hours 00:06:52.739 Unsafe Shutdowns: 0 00:06:52.739 Unrecoverable Media Errors: 0 00:06:52.739 Lifetime Error Log Entries: 0 00:06:52.739 Warning Temperature Time: 0 minutes 00:06:52.739 Critical Temperature Time: 0 minutes 00:06:52.739 00:06:52.739 Number of Queues 00:06:52.739 ================ 00:06:52.739 Number of I/O Submission Queues: 64 00:06:52.739 Number of I/O Completion Queues: 64 00:06:52.739 00:06:52.739 ZNS Specific Controller Data 00:06:52.739 ============================ 00:06:52.739 Zone Append Size Limit: 0 00:06:52.739 00:06:52.739 00:06:52.739 Active Namespaces 00:06:52.739 ================= 00:06:52.739 Namespace ID:1 00:06:52.739 Error Recovery Timeout: Unlimited 00:06:52.739 Command Set Identifier: NVM (00h) 00:06:52.739 Deallocate: Supported 00:06:52.739 Deallocated/Unwritten Error: Supported 00:06:52.739 Deallocated Read Value: All 0x00 00:06:52.739 Deallocate in Write Zeroes: Not Supported 00:06:52.739 Deallocated Guard Field: 0xFFFF 00:06:52.739 Flush: Supported 00:06:52.739 Reservation: Not Supported 00:06:52.739 Namespace Sharing Capabilities: Private 00:06:52.739 Size (in LBAs): 1048576 (4GiB) 00:06:52.739 Capacity (in LBAs): 1048576 (4GiB) 00:06:52.739 Utilization (in LBAs): 1048576 (4GiB) 00:06:52.739 Thin Provisioning: Not Supported 00:06:52.739 Per-NS Atomic Units: No 00:06:52.739 Maximum Single Source Range Length: 128 00:06:52.739 Maximum Copy Length: 128 00:06:52.739 Maximum Source Range Count: 128 00:06:52.739 NGUID/EUI64 Never Reused: No 00:06:52.739 Namespace Write Protected: No 00:06:52.739 Number of LBA Formats: 8 00:06:52.739 Current LBA Format: LBA Format #04 00:06:52.739 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:52.739 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:52.739 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:52.739 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:52.739 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:52.739 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:52.739 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:52.739 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:52.739 00:06:52.739 NVM Specific Namespace Data 00:06:52.739 =========================== 00:06:52.739 Logical Block Storage Tag Mask: 0 00:06:52.739 Protection Information Capabilities: 00:06:52.739 16b Guard Protection Information Storage Tag Support: No 00:06:52.739 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:52.739 Storage Tag Check Read Support: No 00:06:52.739 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.739 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.739 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.739 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.739 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.739 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.739 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.739 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.739 Namespace ID:2 00:06:52.739 Error Recovery Timeout: Unlimited 00:06:52.739 Command Set Identifier: NVM (00h) 00:06:52.739 Deallocate: Supported 00:06:52.739 Deallocated/Unwritten Error: Supported 00:06:52.739 Deallocated Read Value: All 0x00 00:06:52.739 Deallocate in Write Zeroes: Not Supported 00:06:52.739 Deallocated Guard Field: 0xFFFF 00:06:52.739 Flush: Supported 00:06:52.739 Reservation: Not Supported 00:06:52.739 Namespace Sharing Capabilities: Private 00:06:52.739 Size (in LBAs): 1048576 (4GiB) 00:06:52.739 Capacity (in LBAs): 1048576 (4GiB) 00:06:52.739 Utilization (in LBAs): 1048576 (4GiB) 00:06:52.739 Thin Provisioning: Not Supported 00:06:52.739 Per-NS Atomic Units: No 00:06:52.739 Maximum Single Source Range Length: 128 00:06:52.739 Maximum Copy Length: 128 00:06:52.739 Maximum Source Range Count: 128 00:06:52.740 NGUID/EUI64 Never Reused: No 00:06:52.740 Namespace Write Protected: No 00:06:52.740 Number of LBA Formats: 8 00:06:52.740 Current LBA Format: LBA Format #04 00:06:52.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:52.740 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:52.740 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:52.740 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:52.740 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:52.740 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:52.740 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:52.740 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:52.740 00:06:52.740 NVM Specific Namespace Data 00:06:52.740 =========================== 00:06:52.740 Logical Block Storage Tag Mask: 0 00:06:52.740 Protection Information Capabilities: 00:06:52.740 16b Guard Protection Information Storage Tag Support: No 00:06:52.740 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:52.740 Storage Tag Check Read Support: No 00:06:52.740 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Namespace ID:3 00:06:52.740 Error Recovery Timeout: Unlimited 00:06:52.740 Command Set Identifier: NVM (00h) 00:06:52.740 Deallocate: Supported 00:06:52.740 Deallocated/Unwritten Error: Supported 00:06:52.740 Deallocated Read Value: All 0x00 00:06:52.740 Deallocate in Write Zeroes: Not Supported 00:06:52.740 Deallocated Guard Field: 0xFFFF 00:06:52.740 Flush: Supported 00:06:52.740 Reservation: Not Supported 00:06:52.740 
Namespace Sharing Capabilities: Private 00:06:52.740 Size (in LBAs): 1048576 (4GiB) 00:06:52.740 Capacity (in LBAs): 1048576 (4GiB) 00:06:52.740 Utilization (in LBAs): 1048576 (4GiB) 00:06:52.740 Thin Provisioning: Not Supported 00:06:52.740 Per-NS Atomic Units: No 00:06:52.740 Maximum Single Source Range Length: 128 00:06:52.740 Maximum Copy Length: 128 00:06:52.740 Maximum Source Range Count: 128 00:06:52.740 NGUID/EUI64 Never Reused: No 00:06:52.740 Namespace Write Protected: No 00:06:52.740 Number of LBA Formats: 8 00:06:52.740 Current LBA Format: LBA Format #04 00:06:52.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:52.740 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:52.740 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:52.740 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:52.740 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:52.740 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:52.740 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:52.740 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:52.740 00:06:52.740 NVM Specific Namespace Data 00:06:52.740 =========================== 00:06:52.740 Logical Block Storage Tag Mask: 0 00:06:52.740 Protection Information Capabilities: 00:06:52.740 16b Guard Protection Information Storage Tag Support: No 00:06:52.740 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:52.740 Storage Tag Check Read Support: No 00:06:52.740 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:52.740 17:10:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:52.740 17:10:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:06:52.740 ===================================================== 00:06:52.740 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:52.740 ===================================================== 00:06:52.740 Controller Capabilities/Features 00:06:52.740 ================================ 00:06:52.740 Vendor ID: 1b36 00:06:52.740 Subsystem Vendor ID: 1af4 00:06:52.740 Serial Number: 12343 00:06:52.740 Model Number: QEMU NVMe Ctrl 00:06:52.740 Firmware Version: 8.0.0 00:06:52.740 Recommended Arb Burst: 6 00:06:52.740 IEEE OUI Identifier: 00 54 52 00:06:52.740 Multi-path I/O 00:06:52.740 May have multiple subsystem ports: No 00:06:52.740 May have multiple controllers: Yes 00:06:52.740 Associated with SR-IOV VF: No 00:06:52.740 Max Data Transfer Size: 524288 00:06:52.740 Max Number of Namespaces: 256 00:06:52.740 Max Number of I/O Queues: 64 00:06:52.740 NVMe Specification Version (VS): 1.4 00:06:52.740 NVMe Specification Version (Identify): 1.4 00:06:52.740 Maximum Queue Entries: 2048 
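One more aside on the limits reported for controller 12343 (the other dumps above show the same figures): with Maximum Queue Entries: 2048, 64-byte submission queue entries and 16-byte completion queue entries, a single full-depth queue pair needs about 160 KiB of contiguous memory (Contiguous Queues Required: Yes). A sketch of the arithmetic:

  # Full-depth queue pair footprint from the limits reported in these dumps.
  entries=2048
  echo "SQ: $(( entries * 64 / 1024 )) KiB, CQ: $(( entries * 16 / 1024 )) KiB"   # SQ: 128 KiB, CQ: 32 KiB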
00:06:52.740 Contiguous Queues Required: Yes 00:06:52.740 Arbitration Mechanisms Supported 00:06:52.740 Weighted Round Robin: Not Supported 00:06:52.740 Vendor Specific: Not Supported 00:06:52.740 Reset Timeout: 7500 ms 00:06:52.740 Doorbell Stride: 4 bytes 00:06:52.740 NVM Subsystem Reset: Not Supported 00:06:52.740 Command Sets Supported 00:06:52.740 NVM Command Set: Supported 00:06:52.740 Boot Partition: Not Supported 00:06:52.740 Memory Page Size Minimum: 4096 bytes 00:06:52.740 Memory Page Size Maximum: 65536 bytes 00:06:52.740 Persistent Memory Region: Not Supported 00:06:52.740 Optional Asynchronous Events Supported 00:06:52.740 Namespace Attribute Notices: Supported 00:06:52.740 Firmware Activation Notices: Not Supported 00:06:52.740 ANA Change Notices: Not Supported 00:06:52.740 PLE Aggregate Log Change Notices: Not Supported 00:06:52.740 LBA Status Info Alert Notices: Not Supported 00:06:52.740 EGE Aggregate Log Change Notices: Not Supported 00:06:52.740 Normal NVM Subsystem Shutdown event: Not Supported 00:06:52.740 Zone Descriptor Change Notices: Not Supported 00:06:52.740 Discovery Log Change Notices: Not Supported 00:06:52.740 Controller Attributes 00:06:52.740 128-bit Host Identifier: Not Supported 00:06:52.740 Non-Operational Permissive Mode: Not Supported 00:06:52.740 NVM Sets: Not Supported 00:06:52.740 Read Recovery Levels: Not Supported 00:06:52.740 Endurance Groups: Supported 00:06:52.740 Predictable Latency Mode: Not Supported 00:06:52.740 Traffic Based Keep Alive: Not Supported 00:06:52.740 Namespace Granularity: Not Supported 00:06:52.740 SQ Associations: Not Supported 00:06:52.740 UUID List: Not Supported 00:06:52.740 Multi-Domain Subsystem: Not Supported 00:06:52.740 Fixed Capacity Management: Not Supported 00:06:52.740 Variable Capacity Management: Not Supported 00:06:52.740 Delete Endurance Group: Not Supported 00:06:52.740 Delete NVM Set: Not Supported 00:06:52.740 Extended LBA Formats Supported: Supported 00:06:52.740 Flexible Data Placement Supported: Supported 00:06:52.740 00:06:52.740 Controller Memory Buffer Support 00:06:52.740 ================================ 00:06:52.740 Supported: No 00:06:52.740 00:06:52.740 Persistent Memory Region Support 00:06:52.740 ================================ 00:06:52.740 Supported: No 00:06:52.740 00:06:52.740 Admin Command Set Attributes 00:06:52.740 ============================ 00:06:52.740 Security Send/Receive: Not Supported 00:06:52.740 Format NVM: Supported 00:06:52.740 Firmware Activate/Download: Not Supported 00:06:52.740 Namespace Management: Supported 00:06:52.740 Device Self-Test: Not Supported 00:06:52.740 Directives: Supported 00:06:52.740 NVMe-MI: Not Supported 00:06:52.740 Virtualization Management: Not Supported 00:06:52.740 Doorbell Buffer Config: Supported 00:06:52.740 Get LBA Status Capability: Not Supported 00:06:52.740 Command & Feature Lockdown Capability: Not Supported 00:06:52.740 Abort Command Limit: 4 00:06:52.740 Async Event Request Limit: 4 00:06:52.740 Number of Firmware Slots: N/A 00:06:52.740 Firmware Slot 1 Read-Only: N/A 00:06:52.740 Firmware Activation Without Reset: N/A 00:06:52.740 Multiple Update Detection Support: N/A 00:06:52.740 Firmware Update Granularity: No Information Provided 00:06:52.740 Per-Namespace SMART Log: Yes 00:06:52.740 Asymmetric Namespace Access Log Page: Not Supported 00:06:52.740 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:06:52.740 Command Effects Log Page: Supported 00:06:52.740 Get Log Page Extended Data: Supported 00:06:52.740 Telemetry Log Pages: Not 
Supported 00:06:52.740 Persistent Event Log Pages: Not Supported 00:06:52.740 Supported Log Pages Log Page: May Support 00:06:52.740 Commands Supported & Effects Log Page: Not Supported 00:06:52.740 Feature Identifiers & Effects Log Page:May Support 00:06:52.740 NVMe-MI Commands & Effects Log Page: May Support 00:06:52.740 Data Area 4 for Telemetry Log: Not Supported 00:06:52.740 Error Log Page Entries Supported: 1 00:06:52.740 Keep Alive: Not Supported 00:06:52.740 00:06:52.740 NVM Command Set Attributes 00:06:52.740 ========================== 00:06:52.740 Submission Queue Entry Size 00:06:52.740 Max: 64 00:06:52.740 Min: 64 00:06:52.740 Completion Queue Entry Size 00:06:52.741 Max: 16 00:06:52.741 Min: 16 00:06:52.741 Number of Namespaces: 256 00:06:52.741 Compare Command: Supported 00:06:52.741 Write Uncorrectable Command: Not Supported 00:06:52.741 Dataset Management Command: Supported 00:06:52.741 Write Zeroes Command: Supported 00:06:52.741 Set Features Save Field: Supported 00:06:52.741 Reservations: Not Supported 00:06:52.741 Timestamp: Supported 00:06:52.741 Copy: Supported 00:06:52.741 Volatile Write Cache: Present 00:06:52.741 Atomic Write Unit (Normal): 1 00:06:52.741 Atomic Write Unit (PFail): 1 00:06:52.741 Atomic Compare & Write Unit: 1 00:06:52.741 Fused Compare & Write: Not Supported 00:06:52.741 Scatter-Gather List 00:06:52.741 SGL Command Set: Supported 00:06:52.741 SGL Keyed: Not Supported 00:06:52.741 SGL Bit Bucket Descriptor: Not Supported 00:06:52.741 SGL Metadata Pointer: Not Supported 00:06:52.741 Oversized SGL: Not Supported 00:06:52.741 SGL Metadata Address: Not Supported 00:06:52.741 SGL Offset: Not Supported 00:06:52.741 Transport SGL Data Block: Not Supported 00:06:52.741 Replay Protected Memory Block: Not Supported 00:06:52.741 00:06:52.741 Firmware Slot Information 00:06:52.741 ========================= 00:06:52.741 Active slot: 1 00:06:52.741 Slot 1 Firmware Revision: 1.0 00:06:52.741 00:06:52.741 00:06:52.741 Commands Supported and Effects 00:06:52.741 ============================== 00:06:52.741 Admin Commands 00:06:52.741 -------------- 00:06:52.741 Delete I/O Submission Queue (00h): Supported 00:06:52.741 Create I/O Submission Queue (01h): Supported 00:06:52.741 Get Log Page (02h): Supported 00:06:52.741 Delete I/O Completion Queue (04h): Supported 00:06:52.741 Create I/O Completion Queue (05h): Supported 00:06:52.741 Identify (06h): Supported 00:06:52.741 Abort (08h): Supported 00:06:52.741 Set Features (09h): Supported 00:06:52.741 Get Features (0Ah): Supported 00:06:52.741 Asynchronous Event Request (0Ch): Supported 00:06:52.741 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:52.741 Directive Send (19h): Supported 00:06:52.741 Directive Receive (1Ah): Supported 00:06:52.741 Virtualization Management (1Ch): Supported 00:06:52.741 Doorbell Buffer Config (7Ch): Supported 00:06:52.741 Format NVM (80h): Supported LBA-Change 00:06:52.741 I/O Commands 00:06:52.741 ------------ 00:06:52.741 Flush (00h): Supported LBA-Change 00:06:52.741 Write (01h): Supported LBA-Change 00:06:52.741 Read (02h): Supported 00:06:52.741 Compare (05h): Supported 00:06:52.741 Write Zeroes (08h): Supported LBA-Change 00:06:52.741 Dataset Management (09h): Supported LBA-Change 00:06:52.741 Unknown (0Ch): Supported 00:06:52.741 Unknown (12h): Supported 00:06:52.741 Copy (19h): Supported LBA-Change 00:06:52.741 Unknown (1Dh): Supported LBA-Change 00:06:52.741 00:06:52.741 Error Log 00:06:52.741 ========= 00:06:52.741 00:06:52.741 Arbitration 00:06:52.741 =========== 
00:06:52.741 Arbitration Burst: no limit 00:06:52.741 00:06:52.741 Power Management 00:06:52.741 ================ 00:06:52.741 Number of Power States: 1 00:06:52.741 Current Power State: Power State #0 00:06:52.741 Power State #0: 00:06:52.741 Max Power: 25.00 W 00:06:52.741 Non-Operational State: Operational 00:06:52.741 Entry Latency: 16 microseconds 00:06:52.741 Exit Latency: 4 microseconds 00:06:52.741 Relative Read Throughput: 0 00:06:52.741 Relative Read Latency: 0 00:06:52.741 Relative Write Throughput: 0 00:06:52.741 Relative Write Latency: 0 00:06:52.741 Idle Power: Not Reported 00:06:52.741 Active Power: Not Reported 00:06:52.741 Non-Operational Permissive Mode: Not Supported 00:06:52.741 00:06:52.741 Health Information 00:06:52.741 ================== 00:06:52.741 Critical Warnings: 00:06:52.741 Available Spare Space: OK 00:06:52.741 Temperature: OK 00:06:52.741 Device Reliability: OK 00:06:52.741 Read Only: No 00:06:52.741 Volatile Memory Backup: OK 00:06:52.741 Current Temperature: 323 Kelvin (50 Celsius) 00:06:52.741 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:52.741 Available Spare: 0% 00:06:52.741 Available Spare Threshold: 0% 00:06:52.741 Life Percentage Used: 0% 00:06:52.741 Data Units Read: 928 00:06:52.741 Data Units Written: 858 00:06:52.741 Host Read Commands: 40631 00:06:52.741 Host Write Commands: 40054 00:06:52.741 Controller Busy Time: 0 minutes 00:06:52.741 Power Cycles: 0 00:06:52.741 Power On Hours: 0 hours 00:06:52.741 Unsafe Shutdowns: 0 00:06:52.741 Unrecoverable Media Errors: 0 00:06:52.741 Lifetime Error Log Entries: 0 00:06:52.741 Warning Temperature Time: 0 minutes 00:06:52.741 Critical Temperature Time: 0 minutes 00:06:52.741 00:06:52.741 Number of Queues 00:06:52.741 ================ 00:06:52.741 Number of I/O Submission Queues: 64 00:06:52.741 Number of I/O Completion Queues: 64 00:06:52.741 00:06:52.741 ZNS Specific Controller Data 00:06:52.741 ============================ 00:06:52.741 Zone Append Size Limit: 0 00:06:52.741 00:06:52.741 00:06:52.741 Active Namespaces 00:06:52.741 ================= 00:06:52.741 Namespace ID:1 00:06:52.741 Error Recovery Timeout: Unlimited 00:06:52.741 Command Set Identifier: NVM (00h) 00:06:52.741 Deallocate: Supported 00:06:52.741 Deallocated/Unwritten Error: Supported 00:06:52.741 Deallocated Read Value: All 0x00 00:06:52.741 Deallocate in Write Zeroes: Not Supported 00:06:52.741 Deallocated Guard Field: 0xFFFF 00:06:52.741 Flush: Supported 00:06:52.741 Reservation: Not Supported 00:06:52.741 Namespace Sharing Capabilities: Multiple Controllers 00:06:52.741 Size (in LBAs): 262144 (1GiB) 00:06:52.741 Capacity (in LBAs): 262144 (1GiB) 00:06:52.741 Utilization (in LBAs): 262144 (1GiB) 00:06:52.741 Thin Provisioning: Not Supported 00:06:52.741 Per-NS Atomic Units: No 00:06:52.741 Maximum Single Source Range Length: 128 00:06:52.741 Maximum Copy Length: 128 00:06:52.741 Maximum Source Range Count: 128 00:06:52.741 NGUID/EUI64 Never Reused: No 00:06:52.741 Namespace Write Protected: No 00:06:52.741 Endurance group ID: 1 00:06:52.741 Number of LBA Formats: 8 00:06:52.741 Current LBA Format: LBA Format #04 00:06:52.741 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:52.741 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:52.741 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:52.741 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:52.741 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:52.741 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:52.741 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:06:52.741 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:52.741 00:06:52.741 Get Feature FDP: 00:06:52.741 ================ 00:06:52.741 Enabled: Yes 00:06:52.741 FDP configuration index: 0 00:06:52.741 00:06:52.741 FDP configurations log page 00:06:52.741 =========================== 00:06:52.741 Number of FDP configurations: 1 00:06:52.741 Version: 0 00:06:52.741 Size: 112 00:06:52.741 FDP Configuration Descriptor: 0 00:06:52.741 Descriptor Size: 96 00:06:52.741 Reclaim Group Identifier format: 2 00:06:52.741 FDP Volatile Write Cache: Not Present 00:06:52.741 FDP Configuration: Valid 00:06:52.741 Vendor Specific Size: 0 00:06:52.741 Number of Reclaim Groups: 2 00:06:52.741 Number of Reclaim Unit Handles: 8 00:06:52.741 Max Placement Identifiers: 128 00:06:52.741 Number of Namespaces Supported: 256 00:06:52.741 Reclaim unit Nominal Size: 6000000 bytes 00:06:52.741 Estimated Reclaim Unit Time Limit: Not Reported 00:06:52.741 RUH Desc #000: RUH Type: Initially Isolated 00:06:52.741 RUH Desc #001: RUH Type: Initially Isolated 00:06:52.741 RUH Desc #002: RUH Type: Initially Isolated 00:06:52.741 RUH Desc #003: RUH Type: Initially Isolated 00:06:52.741 RUH Desc #004: RUH Type: Initially Isolated 00:06:52.741 RUH Desc #005: RUH Type: Initially Isolated 00:06:52.741 RUH Desc #006: RUH Type: Initially Isolated 00:06:52.741 RUH Desc #007: RUH Type: Initially Isolated 00:06:52.741 00:06:52.741 FDP reclaim unit handle usage log page 00:06:52.999 ====================================== 00:06:52.999 Number of Reclaim Unit Handles: 8 00:06:52.999 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:06:52.999 RUH Usage Desc #001: RUH Attributes: Unused 00:06:52.999 RUH Usage Desc #002: RUH Attributes: Unused 00:06:52.999 RUH Usage Desc #003: RUH Attributes: Unused 00:06:52.999 RUH Usage Desc #004: RUH Attributes: Unused 00:06:52.999 RUH Usage Desc #005: RUH Attributes: Unused 00:06:52.999 RUH Usage Desc #006: RUH Attributes: Unused 00:06:52.999 RUH Usage Desc #007: RUH Attributes: Unused 00:06:52.999 00:06:52.999 FDP statistics log page 00:06:52.999 ======================= 00:06:52.999 Host bytes with metadata written: 548380672 00:06:52.999 Media bytes with metadata written: 548458496 00:06:52.999 Media bytes erased: 0 00:06:52.999 00:06:52.999 FDP events log page 00:06:52.999 =================== 00:06:52.999 Number of FDP events: 0 00:06:52.999 00:06:52.999 NVM Specific Namespace Data 00:06:52.999 =========================== 00:06:52.999 Logical Block Storage Tag Mask: 0 00:06:53.000 Protection Information Capabilities: 00:06:53.000 16b Guard Protection Information Storage Tag Support: No 00:06:53.000 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:53.000 Storage Tag Check Read Support: No 00:06:53.000 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:53.000 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:53.000 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:53.000 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:53.000 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:53.000 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:53.000 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:53.000 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:53.000 00:06:53.000 real 0m1.191s 00:06:53.000 user 0m0.397s 00:06:53.000 sys 0m0.559s 00:06:53.000 17:10:35 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.000 ************************************ 00:06:53.000 17:10:35 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:06:53.000 END TEST nvme_identify 00:06:53.000 ************************************ 00:06:53.000 17:10:35 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:06:53.000 17:10:35 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.000 17:10:35 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.000 17:10:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:53.000 ************************************ 00:06:53.000 START TEST nvme_perf 00:06:53.000 ************************************ 00:06:53.000 17:10:35 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:06:53.000 17:10:35 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:06:54.378 Initializing NVMe Controllers 00:06:54.378 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:54.378 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:54.378 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:54.378 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:54.378 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:06:54.378 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:06:54.378 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:06:54.378 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:06:54.378 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:06:54.378 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:06:54.378 Initialization complete. Launching workers. 
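For reference, the two SPDK tools exercised in this stage can be rerun by hand against the same PCIe targets. A minimal shell sketch, assuming the build tree at /home/vagrant/spdk_repo/spdk used by this job and that the controllers are already bound to a userspace driver (typically done via scripts/setup.sh); the flag annotations are editorial, while the flag values themselves are copied verbatim from the invocations captured above:

  # Dump controller and namespace identify data for one PCIe controller
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # Read workload mirroring the perf run below: -q 128 (queue depth), -w read (workload type),
  # -o 12288 (I/O size in bytes), -t 1 (run time in seconds); -LL, -i 0 and -N as captured above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

As a rough sanity check on the table that follows: 19359.38 IOPS of 12288-byte reads works out to 19359.38 * 12288 / 2^20 ≈ 226.9 MiB/s, matching the MiB/s column, and queue depth 128 divided by the ~6.6 ms average latency lands at roughly the same IOPS figure.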
00:06:54.378 ======================================================== 00:06:54.378 Latency(us) 00:06:54.378 Device Information : IOPS MiB/s Average min max 00:06:54.378 PCIE (0000:00:10.0) NSID 1 from core 0: 19359.38 226.87 6620.00 5479.52 32419.71 00:06:54.378 PCIE (0000:00:11.0) NSID 1 from core 0: 19359.38 226.87 6611.12 5571.04 30653.80 00:06:54.378 PCIE (0000:00:13.0) NSID 1 from core 0: 19359.38 226.87 6600.98 5559.20 29254.13 00:06:54.378 PCIE (0000:00:12.0) NSID 1 from core 0: 19359.38 226.87 6590.50 5559.04 27370.54 00:06:54.378 PCIE (0000:00:12.0) NSID 2 from core 0: 19359.38 226.87 6580.49 5536.07 25597.26 00:06:54.378 PCIE (0000:00:12.0) NSID 3 from core 0: 19423.27 227.62 6548.94 5567.74 20667.85 00:06:54.378 ======================================================== 00:06:54.378 Total : 116220.17 1361.96 6591.98 5479.52 32419.71 00:06:54.378 00:06:54.378 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:54.378 ================================================================================= 00:06:54.378 1.00000% : 5595.766us 00:06:54.378 10.00000% : 5797.415us 00:06:54.378 25.00000% : 5999.065us 00:06:54.378 50.00000% : 6276.332us 00:06:54.378 75.00000% : 6604.012us 00:06:54.378 90.00000% : 7108.135us 00:06:54.378 95.00000% : 8922.978us 00:06:54.378 98.00000% : 10586.585us 00:06:54.378 99.00000% : 11796.480us 00:06:54.378 99.50000% : 27424.295us 00:06:54.378 99.90000% : 32062.228us 00:06:54.378 99.99000% : 32465.526us 00:06:54.378 99.99900% : 32465.526us 00:06:54.378 99.99990% : 32465.526us 00:06:54.378 99.99999% : 32465.526us 00:06:54.378 00:06:54.378 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:54.378 ================================================================================= 00:06:54.378 1.00000% : 5671.385us 00:06:54.378 10.00000% : 5847.828us 00:06:54.378 25.00000% : 6024.271us 00:06:54.378 50.00000% : 6276.332us 00:06:54.378 75.00000% : 6553.600us 00:06:54.378 90.00000% : 7108.135us 00:06:54.378 95.00000% : 8922.978us 00:06:54.378 98.00000% : 10435.348us 00:06:54.378 99.00000% : 11393.182us 00:06:54.378 99.50000% : 25609.452us 00:06:54.378 99.90000% : 30247.385us 00:06:54.378 99.99000% : 30650.683us 00:06:54.378 99.99900% : 30852.332us 00:06:54.378 99.99990% : 30852.332us 00:06:54.378 99.99999% : 30852.332us 00:06:54.378 00:06:54.378 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:54.378 ================================================================================= 00:06:54.378 1.00000% : 5671.385us 00:06:54.378 10.00000% : 5847.828us 00:06:54.378 25.00000% : 6024.271us 00:06:54.378 50.00000% : 6276.332us 00:06:54.378 75.00000% : 6553.600us 00:06:54.378 90.00000% : 7007.311us 00:06:54.378 95.00000% : 8872.566us 00:06:54.378 98.00000% : 10485.760us 00:06:54.378 99.00000% : 11695.655us 00:06:54.378 99.50000% : 24097.083us 00:06:54.378 99.90000% : 28835.840us 00:06:54.378 99.99000% : 29239.138us 00:06:54.378 99.99900% : 29440.788us 00:06:54.378 99.99990% : 29440.788us 00:06:54.378 99.99999% : 29440.788us 00:06:54.378 00:06:54.378 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:54.378 ================================================================================= 00:06:54.378 1.00000% : 5671.385us 00:06:54.378 10.00000% : 5847.828us 00:06:54.378 25.00000% : 6024.271us 00:06:54.378 50.00000% : 6276.332us 00:06:54.378 75.00000% : 6553.600us 00:06:54.378 90.00000% : 7057.723us 00:06:54.378 95.00000% : 8822.154us 00:06:54.378 98.00000% : 10384.935us 00:06:54.378 99.00000% : 
11645.243us 00:06:54.378 99.50000% : 22383.065us 00:06:54.378 99.90000% : 27020.997us 00:06:54.378 99.99000% : 27424.295us 00:06:54.378 99.99900% : 27424.295us 00:06:54.378 99.99990% : 27424.295us 00:06:54.378 99.99999% : 27424.295us 00:06:54.378 00:06:54.378 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:54.378 ================================================================================= 00:06:54.378 1.00000% : 5671.385us 00:06:54.378 10.00000% : 5847.828us 00:06:54.378 25.00000% : 6024.271us 00:06:54.378 50.00000% : 6276.332us 00:06:54.378 75.00000% : 6553.600us 00:06:54.378 90.00000% : 7208.960us 00:06:54.378 95.00000% : 8872.566us 00:06:54.378 98.00000% : 10485.760us 00:06:54.378 99.00000% : 11645.243us 00:06:54.378 99.50000% : 20568.222us 00:06:54.378 99.90000% : 25206.154us 00:06:54.378 99.99000% : 25609.452us 00:06:54.378 99.99900% : 25609.452us 00:06:54.378 99.99990% : 25609.452us 00:06:54.378 99.99999% : 25609.452us 00:06:54.378 00:06:54.378 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:54.378 ================================================================================= 00:06:54.378 1.00000% : 5671.385us 00:06:54.378 10.00000% : 5847.828us 00:06:54.378 25.00000% : 6024.271us 00:06:54.378 50.00000% : 6276.332us 00:06:54.378 75.00000% : 6553.600us 00:06:54.378 90.00000% : 7208.960us 00:06:54.378 95.00000% : 8922.978us 00:06:54.378 98.00000% : 10586.585us 00:06:54.378 99.00000% : 11796.480us 00:06:54.378 99.50000% : 15526.991us 00:06:54.378 99.90000% : 20265.748us 00:06:54.378 99.99000% : 20669.046us 00:06:54.378 99.99900% : 20669.046us 00:06:54.378 99.99990% : 20669.046us 00:06:54.378 99.99999% : 20669.046us 00:06:54.378 00:06:54.378 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:54.378 ============================================================================== 00:06:54.378 Range in us Cumulative IO count 00:06:54.378 5469.735 - 5494.942: 0.0103% ( 2) 00:06:54.378 5494.942 - 5520.148: 0.0980% ( 17) 00:06:54.378 5520.148 - 5545.354: 0.2578% ( 31) 00:06:54.378 5545.354 - 5570.560: 0.6033% ( 67) 00:06:54.378 5570.560 - 5595.766: 1.0623% ( 89) 00:06:54.378 5595.766 - 5620.972: 1.6450% ( 113) 00:06:54.378 5620.972 - 5646.178: 2.4185% ( 150) 00:06:54.378 5646.178 - 5671.385: 3.4757% ( 205) 00:06:54.378 5671.385 - 5696.591: 4.6050% ( 219) 00:06:54.378 5696.591 - 5721.797: 6.0695% ( 284) 00:06:54.378 5721.797 - 5747.003: 7.6733% ( 311) 00:06:54.378 5747.003 - 5772.209: 9.3647% ( 328) 00:06:54.378 5772.209 - 5797.415: 11.0613% ( 329) 00:06:54.378 5797.415 - 5822.622: 12.8919% ( 355) 00:06:54.378 5822.622 - 5847.828: 14.8515% ( 380) 00:06:54.378 5847.828 - 5873.034: 16.8162% ( 381) 00:06:54.378 5873.034 - 5898.240: 18.8480% ( 394) 00:06:54.378 5898.240 - 5923.446: 20.7921% ( 377) 00:06:54.378 5923.446 - 5948.652: 22.8393% ( 397) 00:06:54.378 5948.652 - 5973.858: 24.9278% ( 405) 00:06:54.378 5973.858 - 5999.065: 27.1607% ( 433) 00:06:54.378 5999.065 - 6024.271: 29.0274% ( 362) 00:06:54.378 6024.271 - 6049.477: 31.3119% ( 443) 00:06:54.378 6049.477 - 6074.683: 33.4262% ( 410) 00:06:54.378 6074.683 - 6099.889: 35.5611% ( 414) 00:06:54.378 6099.889 - 6125.095: 37.7269% ( 420) 00:06:54.378 6125.095 - 6150.302: 39.9598% ( 433) 00:06:54.378 6150.302 - 6175.508: 42.0947% ( 414) 00:06:54.378 6175.508 - 6200.714: 44.2347% ( 415) 00:06:54.378 6200.714 - 6225.920: 46.4470% ( 429) 00:06:54.378 6225.920 - 6251.126: 48.6231% ( 422) 00:06:54.378 6251.126 - 6276.332: 50.8302% ( 428) 00:06:54.378 6276.332 - 6301.538: 52.9806% ( 
417) 00:06:54.378 6301.538 - 6326.745: 55.1774% ( 426) 00:06:54.378 6326.745 - 6351.951: 57.3742% ( 426) 00:06:54.378 6351.951 - 6377.157: 59.6638% ( 444) 00:06:54.378 6377.157 - 6402.363: 61.7523% ( 405) 00:06:54.378 6402.363 - 6427.569: 64.0058% ( 437) 00:06:54.378 6427.569 - 6452.775: 66.2438% ( 434) 00:06:54.378 6452.775 - 6503.188: 70.7199% ( 868) 00:06:54.378 6503.188 - 6553.600: 74.8144% ( 794) 00:06:54.378 6553.600 - 6604.012: 78.5118% ( 717) 00:06:54.378 6604.012 - 6654.425: 81.6780% ( 614) 00:06:54.378 6654.425 - 6704.837: 84.0295% ( 456) 00:06:54.378 6704.837 - 6755.249: 85.8859% ( 360) 00:06:54.378 6755.249 - 6805.662: 87.0668% ( 229) 00:06:54.378 6805.662 - 6856.074: 87.9332% ( 168) 00:06:54.378 6856.074 - 6906.486: 88.5932% ( 128) 00:06:54.378 6906.486 - 6956.898: 89.1038% ( 99) 00:06:54.378 6956.898 - 7007.311: 89.5008% ( 77) 00:06:54.378 7007.311 - 7057.723: 89.7741% ( 53) 00:06:54.378 7057.723 - 7108.135: 90.0474% ( 53) 00:06:54.378 7108.135 - 7158.548: 90.2640% ( 42) 00:06:54.378 7158.548 - 7208.960: 90.5270% ( 51) 00:06:54.378 7208.960 - 7259.372: 90.7488% ( 43) 00:06:54.378 7259.372 - 7309.785: 90.9602% ( 41) 00:06:54.379 7309.785 - 7360.197: 91.1355% ( 34) 00:06:54.379 7360.197 - 7410.609: 91.3469% ( 41) 00:06:54.379 7410.609 - 7461.022: 91.5274% ( 35) 00:06:54.379 7461.022 - 7511.434: 91.6718% ( 28) 00:06:54.379 7511.434 - 7561.846: 91.7904% ( 23) 00:06:54.379 7561.846 - 7612.258: 91.9142% ( 24) 00:06:54.379 7612.258 - 7662.671: 92.0173% ( 20) 00:06:54.379 7662.671 - 7713.083: 92.1153% ( 19) 00:06:54.379 7713.083 - 7763.495: 92.2339% ( 23) 00:06:54.379 7763.495 - 7813.908: 92.3577% ( 24) 00:06:54.379 7813.908 - 7864.320: 92.4763% ( 23) 00:06:54.379 7864.320 - 7914.732: 92.6000% ( 24) 00:06:54.379 7914.732 - 7965.145: 92.7238% ( 24) 00:06:54.379 7965.145 - 8015.557: 92.8785% ( 30) 00:06:54.379 8015.557 - 8065.969: 92.9971% ( 23) 00:06:54.379 8065.969 - 8116.382: 93.1570% ( 31) 00:06:54.379 8116.382 - 8166.794: 93.2756% ( 23) 00:06:54.379 8166.794 - 8217.206: 93.3839% ( 21) 00:06:54.379 8217.206 - 8267.618: 93.5231% ( 27) 00:06:54.379 8267.618 - 8318.031: 93.6056% ( 16) 00:06:54.379 8318.031 - 8368.443: 93.7139% ( 21) 00:06:54.379 8368.443 - 8418.855: 93.7964% ( 16) 00:06:54.379 8418.855 - 8469.268: 93.9253% ( 25) 00:06:54.379 8469.268 - 8519.680: 94.0336% ( 21) 00:06:54.379 8519.680 - 8570.092: 94.1625% ( 25) 00:06:54.379 8570.092 - 8620.505: 94.2966% ( 26) 00:06:54.379 8620.505 - 8670.917: 94.4152% ( 23) 00:06:54.379 8670.917 - 8721.329: 94.5338% ( 23) 00:06:54.379 8721.329 - 8771.742: 94.6731% ( 27) 00:06:54.379 8771.742 - 8822.154: 94.8020% ( 25) 00:06:54.379 8822.154 - 8872.566: 94.9206% ( 23) 00:06:54.379 8872.566 - 8922.978: 95.0392% ( 23) 00:06:54.379 8922.978 - 8973.391: 95.1681% ( 25) 00:06:54.379 8973.391 - 9023.803: 95.2764% ( 21) 00:06:54.379 9023.803 - 9074.215: 95.4259% ( 29) 00:06:54.379 9074.215 - 9124.628: 95.5910% ( 32) 00:06:54.379 9124.628 - 9175.040: 95.7405% ( 29) 00:06:54.379 9175.040 - 9225.452: 95.8488% ( 21) 00:06:54.379 9225.452 - 9275.865: 95.9726% ( 24) 00:06:54.379 9275.865 - 9326.277: 96.0809% ( 21) 00:06:54.379 9326.277 - 9376.689: 96.1995% ( 23) 00:06:54.379 9376.689 - 9427.102: 96.2923% ( 18) 00:06:54.379 9427.102 - 9477.514: 96.3800% ( 17) 00:06:54.379 9477.514 - 9527.926: 96.4676% ( 17) 00:06:54.379 9527.926 - 9578.338: 96.5604% ( 18) 00:06:54.379 9578.338 - 9628.751: 96.6584% ( 19) 00:06:54.379 9628.751 - 9679.163: 96.7409% ( 16) 00:06:54.379 9679.163 - 9729.575: 96.8234% ( 16) 00:06:54.379 9729.575 - 9779.988: 96.8853% ( 12) 
00:06:54.379 9779.988 - 9830.400: 96.9524% ( 13) 00:06:54.379 9830.400 - 9880.812: 97.0142% ( 12) 00:06:54.379 9880.812 - 9931.225: 97.1071% ( 18) 00:06:54.379 9931.225 - 9981.637: 97.1896% ( 16) 00:06:54.379 9981.637 - 10032.049: 97.2669% ( 15) 00:06:54.379 10032.049 - 10082.462: 97.3391% ( 14) 00:06:54.379 10082.462 - 10132.874: 97.4113% ( 14) 00:06:54.379 10132.874 - 10183.286: 97.4783% ( 13) 00:06:54.379 10183.286 - 10233.698: 97.5557% ( 15) 00:06:54.379 10233.698 - 10284.111: 97.6279% ( 14) 00:06:54.379 10284.111 - 10334.523: 97.7001% ( 14) 00:06:54.379 10334.523 - 10384.935: 97.7671% ( 13) 00:06:54.379 10384.935 - 10435.348: 97.8238% ( 11) 00:06:54.379 10435.348 - 10485.760: 97.9012% ( 15) 00:06:54.379 10485.760 - 10536.172: 97.9785% ( 15) 00:06:54.379 10536.172 - 10586.585: 98.0662% ( 17) 00:06:54.379 10586.585 - 10636.997: 98.1333% ( 13) 00:06:54.379 10636.997 - 10687.409: 98.2209% ( 17) 00:06:54.379 10687.409 - 10737.822: 98.2776% ( 11) 00:06:54.379 10737.822 - 10788.234: 98.3292% ( 10) 00:06:54.379 10788.234 - 10838.646: 98.4014% ( 14) 00:06:54.379 10838.646 - 10889.058: 98.4633% ( 12) 00:06:54.379 10889.058 - 10939.471: 98.4891% ( 5) 00:06:54.379 10939.471 - 10989.883: 98.5355% ( 9) 00:06:54.379 10989.883 - 11040.295: 98.5716% ( 7) 00:06:54.379 11040.295 - 11090.708: 98.5922% ( 4) 00:06:54.379 11090.708 - 11141.120: 98.6180% ( 5) 00:06:54.379 11141.120 - 11191.532: 98.6541% ( 7) 00:06:54.379 11191.532 - 11241.945: 98.6850% ( 6) 00:06:54.379 11241.945 - 11292.357: 98.7160% ( 6) 00:06:54.379 11292.357 - 11342.769: 98.7572% ( 8) 00:06:54.379 11342.769 - 11393.182: 98.7882% ( 6) 00:06:54.379 11393.182 - 11443.594: 98.8191% ( 6) 00:06:54.379 11443.594 - 11494.006: 98.8552% ( 7) 00:06:54.379 11494.006 - 11544.418: 98.8861% ( 6) 00:06:54.379 11544.418 - 11594.831: 98.9222% ( 7) 00:06:54.379 11594.831 - 11645.243: 98.9583% ( 7) 00:06:54.379 11645.243 - 11695.655: 98.9686% ( 2) 00:06:54.379 11695.655 - 11746.068: 98.9944% ( 5) 00:06:54.379 11746.068 - 11796.480: 99.0047% ( 2) 00:06:54.379 11796.480 - 11846.892: 99.0357% ( 6) 00:06:54.379 11846.892 - 11897.305: 99.0512% ( 3) 00:06:54.379 11897.305 - 11947.717: 99.0666% ( 3) 00:06:54.379 11947.717 - 11998.129: 99.0873% ( 4) 00:06:54.379 11998.129 - 12048.542: 99.1027% ( 3) 00:06:54.379 12048.542 - 12098.954: 99.1233% ( 4) 00:06:54.379 12098.954 - 12149.366: 99.1388% ( 3) 00:06:54.379 12149.366 - 12199.778: 99.1543% ( 3) 00:06:54.379 12199.778 - 12250.191: 99.1801% ( 5) 00:06:54.379 12250.191 - 12300.603: 99.1904% ( 2) 00:06:54.379 12300.603 - 12351.015: 99.2059% ( 3) 00:06:54.379 12351.015 - 12401.428: 99.2213% ( 3) 00:06:54.379 12401.428 - 12451.840: 99.2420% ( 4) 00:06:54.379 12451.840 - 12502.252: 99.2523% ( 2) 00:06:54.379 12502.252 - 12552.665: 99.2574% ( 1) 00:06:54.379 12552.665 - 12603.077: 99.2677% ( 2) 00:06:54.379 12603.077 - 12653.489: 99.2781% ( 2) 00:06:54.379 12653.489 - 12703.902: 99.2884% ( 2) 00:06:54.379 12703.902 - 12754.314: 99.2935% ( 1) 00:06:54.379 12754.314 - 12804.726: 99.2987% ( 1) 00:06:54.379 12804.726 - 12855.138: 99.3142% ( 3) 00:06:54.379 12855.138 - 12905.551: 99.3245% ( 2) 00:06:54.379 12905.551 - 13006.375: 99.3399% ( 3) 00:06:54.379 26214.400 - 26416.049: 99.3451% ( 1) 00:06:54.379 26416.049 - 26617.698: 99.3812% ( 7) 00:06:54.379 26617.698 - 26819.348: 99.4173% ( 7) 00:06:54.379 26819.348 - 27020.997: 99.4585% ( 8) 00:06:54.379 27020.997 - 27222.646: 99.4998% ( 8) 00:06:54.379 27222.646 - 27424.295: 99.5410% ( 8) 00:06:54.379 27424.295 - 27625.945: 99.5875% ( 9) 00:06:54.379 27625.945 - 27827.594: 
99.6236% ( 7) 00:06:54.379 27827.594 - 28029.243: 99.6700% ( 9) 00:06:54.379 30650.683 - 30852.332: 99.6803% ( 2) 00:06:54.379 30852.332 - 31053.982: 99.7164% ( 7) 00:06:54.379 31053.982 - 31255.631: 99.7576% ( 8) 00:06:54.379 31255.631 - 31457.280: 99.7989% ( 8) 00:06:54.379 31457.280 - 31658.929: 99.8453% ( 9) 00:06:54.379 31658.929 - 31860.578: 99.8866% ( 8) 00:06:54.379 31860.578 - 32062.228: 99.9278% ( 8) 00:06:54.379 32062.228 - 32263.877: 99.9691% ( 8) 00:06:54.379 32263.877 - 32465.526: 100.0000% ( 6) 00:06:54.379 00:06:54.379 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:54.379 ============================================================================== 00:06:54.379 Range in us Cumulative IO count 00:06:54.379 5570.560 - 5595.766: 0.0516% ( 10) 00:06:54.379 5595.766 - 5620.972: 0.2269% ( 34) 00:06:54.379 5620.972 - 5646.178: 0.5982% ( 72) 00:06:54.379 5646.178 - 5671.385: 1.0984% ( 97) 00:06:54.379 5671.385 - 5696.591: 1.7327% ( 123) 00:06:54.379 5696.591 - 5721.797: 2.5526% ( 159) 00:06:54.379 5721.797 - 5747.003: 3.7696% ( 236) 00:06:54.379 5747.003 - 5772.209: 5.1310% ( 264) 00:06:54.379 5772.209 - 5797.415: 6.7450% ( 313) 00:06:54.379 5797.415 - 5822.622: 8.5190% ( 344) 00:06:54.379 5822.622 - 5847.828: 10.4270% ( 370) 00:06:54.379 5847.828 - 5873.034: 12.4948% ( 401) 00:06:54.379 5873.034 - 5898.240: 14.7226% ( 432) 00:06:54.379 5898.240 - 5923.446: 17.0225% ( 446) 00:06:54.379 5923.446 - 5948.652: 19.3740% ( 456) 00:06:54.379 5948.652 - 5973.858: 21.7822% ( 467) 00:06:54.379 5973.858 - 5999.065: 24.2265% ( 474) 00:06:54.379 5999.065 - 6024.271: 26.7327% ( 486) 00:06:54.379 6024.271 - 6049.477: 29.1925% ( 477) 00:06:54.379 6049.477 - 6074.683: 31.7296% ( 492) 00:06:54.379 6074.683 - 6099.889: 34.2512% ( 489) 00:06:54.379 6099.889 - 6125.095: 36.8554% ( 505) 00:06:54.379 6125.095 - 6150.302: 39.3616% ( 486) 00:06:54.379 6150.302 - 6175.508: 41.8987% ( 492) 00:06:54.379 6175.508 - 6200.714: 44.4307% ( 491) 00:06:54.379 6200.714 - 6225.920: 46.9988% ( 498) 00:06:54.379 6225.920 - 6251.126: 49.5565% ( 496) 00:06:54.379 6251.126 - 6276.332: 52.1040% ( 494) 00:06:54.379 6276.332 - 6301.538: 54.6978% ( 503) 00:06:54.379 6301.538 - 6326.745: 57.2917% ( 503) 00:06:54.379 6326.745 - 6351.951: 59.9165% ( 509) 00:06:54.379 6351.951 - 6377.157: 62.5258% ( 506) 00:06:54.379 6377.157 - 6402.363: 65.1506% ( 509) 00:06:54.379 6402.363 - 6427.569: 67.7238% ( 499) 00:06:54.379 6427.569 - 6452.775: 70.1784% ( 476) 00:06:54.379 6452.775 - 6503.188: 74.7009% ( 877) 00:06:54.379 6503.188 - 6553.600: 78.6716% ( 770) 00:06:54.379 6553.600 - 6604.012: 81.8740% ( 621) 00:06:54.379 6604.012 - 6654.425: 84.2770% ( 466) 00:06:54.379 6654.425 - 6704.837: 85.9478% ( 324) 00:06:54.379 6704.837 - 6755.249: 87.0823% ( 220) 00:06:54.379 6755.249 - 6805.662: 87.9177% ( 162) 00:06:54.379 6805.662 - 6856.074: 88.5468% ( 122) 00:06:54.379 6856.074 - 6906.486: 89.0161% ( 91) 00:06:54.379 6906.486 - 6956.898: 89.3307% ( 61) 00:06:54.379 6956.898 - 7007.311: 89.6607% ( 64) 00:06:54.379 7007.311 - 7057.723: 89.9340% ( 53) 00:06:54.379 7057.723 - 7108.135: 90.2228% ( 56) 00:06:54.379 7108.135 - 7158.548: 90.4806% ( 50) 00:06:54.379 7158.548 - 7208.960: 90.6663% ( 36) 00:06:54.379 7208.960 - 7259.372: 90.8364% ( 33) 00:06:54.379 7259.372 - 7309.785: 91.0272% ( 37) 00:06:54.379 7309.785 - 7360.197: 91.1665% ( 27) 00:06:54.379 7360.197 - 7410.609: 91.2799% ( 22) 00:06:54.379 7410.609 - 7461.022: 91.4088% ( 25) 00:06:54.379 7461.022 - 7511.434: 91.5223% ( 22) 00:06:54.379 7511.434 - 7561.846: 
91.6409% ( 23) 00:06:54.379 7561.846 - 7612.258: 91.7801% ( 27) 00:06:54.379 7612.258 - 7662.671: 91.8833% ( 20) 00:06:54.379 7662.671 - 7713.083: 91.9915% ( 21) 00:06:54.379 7713.083 - 7763.495: 92.1050% ( 22) 00:06:54.379 7763.495 - 7813.908: 92.2133% ( 21) 00:06:54.379 7813.908 - 7864.320: 92.3267% ( 22) 00:06:54.380 7864.320 - 7914.732: 92.4402% ( 22) 00:06:54.380 7914.732 - 7965.145: 92.5588% ( 23) 00:06:54.380 7965.145 - 8015.557: 92.6929% ( 26) 00:06:54.380 8015.557 - 8065.969: 92.8166% ( 24) 00:06:54.380 8065.969 - 8116.382: 92.9198% ( 20) 00:06:54.380 8116.382 - 8166.794: 93.0487% ( 25) 00:06:54.380 8166.794 - 8217.206: 93.1982% ( 29) 00:06:54.380 8217.206 - 8267.618: 93.3426% ( 28) 00:06:54.380 8267.618 - 8318.031: 93.4922% ( 29) 00:06:54.380 8318.031 - 8368.443: 93.6417% ( 29) 00:06:54.380 8368.443 - 8418.855: 93.7603% ( 23) 00:06:54.380 8418.855 - 8469.268: 93.8944% ( 26) 00:06:54.380 8469.268 - 8519.680: 94.0388% ( 28) 00:06:54.380 8519.680 - 8570.092: 94.1832% ( 28) 00:06:54.380 8570.092 - 8620.505: 94.3482% ( 32) 00:06:54.380 8620.505 - 8670.917: 94.4823% ( 26) 00:06:54.380 8670.917 - 8721.329: 94.6060% ( 24) 00:06:54.380 8721.329 - 8771.742: 94.7401% ( 26) 00:06:54.380 8771.742 - 8822.154: 94.8587% ( 23) 00:06:54.380 8822.154 - 8872.566: 94.9876% ( 25) 00:06:54.380 8872.566 - 8922.978: 95.1062% ( 23) 00:06:54.380 8922.978 - 8973.391: 95.2248% ( 23) 00:06:54.380 8973.391 - 9023.803: 95.3331% ( 21) 00:06:54.380 9023.803 - 9074.215: 95.4311% ( 19) 00:06:54.380 9074.215 - 9124.628: 95.5136% ( 16) 00:06:54.380 9124.628 - 9175.040: 95.6013% ( 17) 00:06:54.380 9175.040 - 9225.452: 95.7096% ( 21) 00:06:54.380 9225.452 - 9275.865: 95.8385% ( 25) 00:06:54.380 9275.865 - 9326.277: 95.9468% ( 21) 00:06:54.380 9326.277 - 9376.689: 96.0241% ( 15) 00:06:54.380 9376.689 - 9427.102: 96.0963% ( 14) 00:06:54.380 9427.102 - 9477.514: 96.1685% ( 14) 00:06:54.380 9477.514 - 9527.926: 96.2304% ( 12) 00:06:54.380 9527.926 - 9578.338: 96.2974% ( 13) 00:06:54.380 9578.338 - 9628.751: 96.3490% ( 10) 00:06:54.380 9628.751 - 9679.163: 96.4006% ( 10) 00:06:54.380 9679.163 - 9729.575: 96.4831% ( 16) 00:06:54.380 9729.575 - 9779.988: 96.5553% ( 14) 00:06:54.380 9779.988 - 9830.400: 96.6533% ( 19) 00:06:54.380 9830.400 - 9880.812: 96.7564% ( 20) 00:06:54.380 9880.812 - 9931.225: 96.8544% ( 19) 00:06:54.380 9931.225 - 9981.637: 96.9627% ( 21) 00:06:54.380 9981.637 - 10032.049: 97.1019% ( 27) 00:06:54.380 10032.049 - 10082.462: 97.1947% ( 18) 00:06:54.380 10082.462 - 10132.874: 97.2979% ( 20) 00:06:54.380 10132.874 - 10183.286: 97.4165% ( 23) 00:06:54.380 10183.286 - 10233.698: 97.5402% ( 24) 00:06:54.380 10233.698 - 10284.111: 97.6537% ( 22) 00:06:54.380 10284.111 - 10334.523: 97.7929% ( 27) 00:06:54.380 10334.523 - 10384.935: 97.9321% ( 27) 00:06:54.380 10384.935 - 10435.348: 98.0559% ( 24) 00:06:54.380 10435.348 - 10485.760: 98.1590% ( 20) 00:06:54.380 10485.760 - 10536.172: 98.2570% ( 19) 00:06:54.380 10536.172 - 10586.585: 98.3395% ( 16) 00:06:54.380 10586.585 - 10636.997: 98.4220% ( 16) 00:06:54.380 10636.997 - 10687.409: 98.4891% ( 13) 00:06:54.380 10687.409 - 10737.822: 98.5509% ( 12) 00:06:54.380 10737.822 - 10788.234: 98.6128% ( 12) 00:06:54.380 10788.234 - 10838.646: 98.6541% ( 8) 00:06:54.380 10838.646 - 10889.058: 98.6953% ( 8) 00:06:54.380 10889.058 - 10939.471: 98.7417% ( 9) 00:06:54.380 10939.471 - 10989.883: 98.7882% ( 9) 00:06:54.380 10989.883 - 11040.295: 98.8243% ( 7) 00:06:54.380 11040.295 - 11090.708: 98.8707% ( 9) 00:06:54.380 11090.708 - 11141.120: 98.9068% ( 7) 00:06:54.380 11141.120 
- 11191.532: 98.9274% ( 4) 00:06:54.380 11191.532 - 11241.945: 98.9480% ( 4) 00:06:54.380 11241.945 - 11292.357: 98.9738% ( 5) 00:06:54.380 11292.357 - 11342.769: 98.9944% ( 4) 00:06:54.380 11342.769 - 11393.182: 99.0047% ( 2) 00:06:54.380 11393.182 - 11443.594: 99.0099% ( 1) 00:06:54.380 12149.366 - 12199.778: 99.0254% ( 3) 00:06:54.380 12199.778 - 12250.191: 99.0357% ( 2) 00:06:54.380 12250.191 - 12300.603: 99.0460% ( 2) 00:06:54.380 12300.603 - 12351.015: 99.0563% ( 2) 00:06:54.380 12351.015 - 12401.428: 99.0666% ( 2) 00:06:54.380 12401.428 - 12451.840: 99.0769% ( 2) 00:06:54.380 12451.840 - 12502.252: 99.0873% ( 2) 00:06:54.380 12502.252 - 12552.665: 99.0976% ( 2) 00:06:54.380 12552.665 - 12603.077: 99.1079% ( 2) 00:06:54.380 12653.489 - 12703.902: 99.1182% ( 2) 00:06:54.380 12703.902 - 12754.314: 99.1285% ( 2) 00:06:54.380 12754.314 - 12804.726: 99.1388% ( 2) 00:06:54.380 12804.726 - 12855.138: 99.1491% ( 2) 00:06:54.380 12855.138 - 12905.551: 99.1698% ( 4) 00:06:54.380 12905.551 - 13006.375: 99.2059% ( 7) 00:06:54.380 13006.375 - 13107.200: 99.2471% ( 8) 00:06:54.380 13107.200 - 13208.025: 99.2832% ( 7) 00:06:54.380 13208.025 - 13308.849: 99.3245% ( 8) 00:06:54.380 13308.849 - 13409.674: 99.3399% ( 3) 00:06:54.380 24702.031 - 24802.855: 99.3502% ( 2) 00:06:54.380 24802.855 - 24903.680: 99.3709% ( 4) 00:06:54.380 24903.680 - 25004.505: 99.3915% ( 4) 00:06:54.380 25004.505 - 25105.329: 99.4173% ( 5) 00:06:54.380 25105.329 - 25206.154: 99.4379% ( 4) 00:06:54.380 25206.154 - 25306.978: 99.4534% ( 3) 00:06:54.380 25306.978 - 25407.803: 99.4792% ( 5) 00:06:54.380 25407.803 - 25508.628: 99.4998% ( 4) 00:06:54.380 25508.628 - 25609.452: 99.5256% ( 5) 00:06:54.380 25609.452 - 25710.277: 99.5462% ( 4) 00:06:54.380 25710.277 - 25811.102: 99.5668% ( 4) 00:06:54.380 25811.102 - 26012.751: 99.6132% ( 9) 00:06:54.380 26012.751 - 26214.400: 99.6545% ( 8) 00:06:54.380 26214.400 - 26416.049: 99.6700% ( 3) 00:06:54.380 29037.489 - 29239.138: 99.6906% ( 4) 00:06:54.380 29239.138 - 29440.788: 99.7318% ( 8) 00:06:54.380 29440.788 - 29642.437: 99.7731% ( 8) 00:06:54.380 29642.437 - 29844.086: 99.8195% ( 9) 00:06:54.380 29844.086 - 30045.735: 99.8659% ( 9) 00:06:54.380 30045.735 - 30247.385: 99.9072% ( 8) 00:06:54.380 30247.385 - 30449.034: 99.9484% ( 8) 00:06:54.380 30449.034 - 30650.683: 99.9948% ( 9) 00:06:54.380 30650.683 - 30852.332: 100.0000% ( 1) 00:06:54.380 00:06:54.380 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:54.380 ============================================================================== 00:06:54.380 Range in us Cumulative IO count 00:06:54.380 5545.354 - 5570.560: 0.0206% ( 4) 00:06:54.380 5570.560 - 5595.766: 0.0980% ( 15) 00:06:54.380 5595.766 - 5620.972: 0.2321% ( 26) 00:06:54.380 5620.972 - 5646.178: 0.5982% ( 71) 00:06:54.380 5646.178 - 5671.385: 1.0365% ( 85) 00:06:54.380 5671.385 - 5696.591: 1.7636% ( 141) 00:06:54.380 5696.591 - 5721.797: 2.5887% ( 160) 00:06:54.380 5721.797 - 5747.003: 3.6149% ( 199) 00:06:54.380 5747.003 - 5772.209: 5.0743% ( 283) 00:06:54.380 5772.209 - 5797.415: 6.7605% ( 327) 00:06:54.380 5797.415 - 5822.622: 8.5551% ( 348) 00:06:54.380 5822.622 - 5847.828: 10.5714% ( 391) 00:06:54.380 5847.828 - 5873.034: 12.7424% ( 421) 00:06:54.380 5873.034 - 5898.240: 14.8051% ( 400) 00:06:54.380 5898.240 - 5923.446: 17.0586% ( 437) 00:06:54.380 5923.446 - 5948.652: 19.3998% ( 454) 00:06:54.380 5948.652 - 5973.858: 21.7461% ( 455) 00:06:54.380 5973.858 - 5999.065: 24.2574% ( 487) 00:06:54.380 5999.065 - 6024.271: 26.7275% ( 479) 00:06:54.380 
6024.271 - 6049.477: 29.2079% ( 481) 00:06:54.380 6049.477 - 6074.683: 31.7708% ( 497) 00:06:54.380 6074.683 - 6099.889: 34.3131% ( 493) 00:06:54.380 6099.889 - 6125.095: 36.8967% ( 501) 00:06:54.380 6125.095 - 6150.302: 39.4905% ( 503) 00:06:54.380 6150.302 - 6175.508: 42.0276% ( 492) 00:06:54.380 6175.508 - 6200.714: 44.6318% ( 505) 00:06:54.380 6200.714 - 6225.920: 47.1999% ( 498) 00:06:54.380 6225.920 - 6251.126: 49.7370% ( 492) 00:06:54.380 6251.126 - 6276.332: 52.3566% ( 508) 00:06:54.380 6276.332 - 6301.538: 54.9299% ( 499) 00:06:54.380 6301.538 - 6326.745: 57.5031% ( 499) 00:06:54.380 6326.745 - 6351.951: 60.0763% ( 499) 00:06:54.380 6351.951 - 6377.157: 62.6702% ( 503) 00:06:54.380 6377.157 - 6402.363: 65.2382% ( 498) 00:06:54.380 6402.363 - 6427.569: 67.7599% ( 489) 00:06:54.380 6427.569 - 6452.775: 70.2919% ( 491) 00:06:54.380 6452.775 - 6503.188: 74.7989% ( 874) 00:06:54.380 6503.188 - 6553.600: 78.7902% ( 774) 00:06:54.380 6553.600 - 6604.012: 82.0132% ( 625) 00:06:54.380 6604.012 - 6654.425: 84.3183% ( 447) 00:06:54.380 6654.425 - 6704.837: 85.9942% ( 325) 00:06:54.380 6704.837 - 6755.249: 87.1184% ( 218) 00:06:54.380 6755.249 - 6805.662: 88.0879% ( 188) 00:06:54.380 6805.662 - 6856.074: 88.7531% ( 129) 00:06:54.380 6856.074 - 6906.486: 89.2842% ( 103) 00:06:54.380 6906.486 - 6956.898: 89.6916% ( 79) 00:06:54.380 6956.898 - 7007.311: 90.0371% ( 67) 00:06:54.380 7007.311 - 7057.723: 90.2847% ( 48) 00:06:54.380 7057.723 - 7108.135: 90.5219% ( 46) 00:06:54.380 7108.135 - 7158.548: 90.7075% ( 36) 00:06:54.380 7158.548 - 7208.960: 90.8519% ( 28) 00:06:54.380 7208.960 - 7259.372: 90.9653% ( 22) 00:06:54.380 7259.372 - 7309.785: 91.0530% ( 17) 00:06:54.380 7309.785 - 7360.197: 91.1304% ( 15) 00:06:54.380 7360.197 - 7410.609: 91.1974% ( 13) 00:06:54.380 7410.609 - 7461.022: 91.2593% ( 12) 00:06:54.380 7461.022 - 7511.434: 91.3366% ( 15) 00:06:54.380 7511.434 - 7561.846: 91.4088% ( 14) 00:06:54.380 7561.846 - 7612.258: 91.4810% ( 14) 00:06:54.380 7612.258 - 7662.671: 91.5790% ( 19) 00:06:54.380 7662.671 - 7713.083: 91.6821% ( 20) 00:06:54.380 7713.083 - 7763.495: 91.7956% ( 22) 00:06:54.380 7763.495 - 7813.908: 91.9090% ( 22) 00:06:54.380 7813.908 - 7864.320: 92.0895% ( 35) 00:06:54.380 7864.320 - 7914.732: 92.2339% ( 28) 00:06:54.380 7914.732 - 7965.145: 92.3680% ( 26) 00:06:54.380 7965.145 - 8015.557: 92.4866% ( 23) 00:06:54.380 8015.557 - 8065.969: 92.6722% ( 36) 00:06:54.380 8065.969 - 8116.382: 92.8476% ( 34) 00:06:54.380 8116.382 - 8166.794: 93.0384% ( 37) 00:06:54.380 8166.794 - 8217.206: 93.2034% ( 32) 00:06:54.380 8217.206 - 8267.618: 93.4148% ( 41) 00:06:54.380 8267.618 - 8318.031: 93.5644% ( 29) 00:06:54.380 8318.031 - 8368.443: 93.7500% ( 36) 00:06:54.380 8368.443 - 8418.855: 93.8944% ( 28) 00:06:54.380 8418.855 - 8469.268: 94.0388% ( 28) 00:06:54.380 8469.268 - 8519.680: 94.1986% ( 31) 00:06:54.380 8519.680 - 8570.092: 94.3430% ( 28) 00:06:54.380 8570.092 - 8620.505: 94.4874% ( 28) 00:06:54.381 8620.505 - 8670.917: 94.6112% ( 24) 00:06:54.381 8670.917 - 8721.329: 94.7246% ( 22) 00:06:54.381 8721.329 - 8771.742: 94.8587% ( 26) 00:06:54.381 8771.742 - 8822.154: 94.9876% ( 25) 00:06:54.381 8822.154 - 8872.566: 95.0959% ( 21) 00:06:54.381 8872.566 - 8922.978: 95.1939% ( 19) 00:06:54.381 8922.978 - 8973.391: 95.2764% ( 16) 00:06:54.381 8973.391 - 9023.803: 95.3795% ( 20) 00:06:54.381 9023.803 - 9074.215: 95.4827% ( 20) 00:06:54.381 9074.215 - 9124.628: 95.5600% ( 15) 00:06:54.381 9124.628 - 9175.040: 95.6322% ( 14) 00:06:54.381 9175.040 - 9225.452: 95.7354% ( 20) 00:06:54.381 
9225.452 - 9275.865: 95.8230% ( 17) 00:06:54.381 9275.865 - 9326.277: 95.8901% ( 13) 00:06:54.381 9326.277 - 9376.689: 95.9880% ( 19) 00:06:54.381 9376.689 - 9427.102: 96.0963% ( 21) 00:06:54.381 9427.102 - 9477.514: 96.1995% ( 20) 00:06:54.381 9477.514 - 9527.926: 96.2871% ( 17) 00:06:54.381 9527.926 - 9578.338: 96.3748% ( 17) 00:06:54.381 9578.338 - 9628.751: 96.4625% ( 17) 00:06:54.381 9628.751 - 9679.163: 96.5501% ( 17) 00:06:54.381 9679.163 - 9729.575: 96.6429% ( 18) 00:06:54.381 9729.575 - 9779.988: 96.7203% ( 15) 00:06:54.381 9779.988 - 9830.400: 96.7925% ( 14) 00:06:54.381 9830.400 - 9880.812: 96.9059% ( 22) 00:06:54.381 9880.812 - 9931.225: 97.0194% ( 22) 00:06:54.381 9931.225 - 9981.637: 97.1174% ( 19) 00:06:54.381 9981.637 - 10032.049: 97.2205% ( 20) 00:06:54.381 10032.049 - 10082.462: 97.3185% ( 19) 00:06:54.381 10082.462 - 10132.874: 97.4216% ( 20) 00:06:54.381 10132.874 - 10183.286: 97.5299% ( 21) 00:06:54.381 10183.286 - 10233.698: 97.6073% ( 15) 00:06:54.381 10233.698 - 10284.111: 97.6949% ( 17) 00:06:54.381 10284.111 - 10334.523: 97.7981% ( 20) 00:06:54.381 10334.523 - 10384.935: 97.8703% ( 14) 00:06:54.381 10384.935 - 10435.348: 97.9734% ( 20) 00:06:54.381 10435.348 - 10485.760: 98.0404% ( 13) 00:06:54.381 10485.760 - 10536.172: 98.1178% ( 15) 00:06:54.381 10536.172 - 10586.585: 98.1900% ( 14) 00:06:54.381 10586.585 - 10636.997: 98.2570% ( 13) 00:06:54.381 10636.997 - 10687.409: 98.3189% ( 12) 00:06:54.381 10687.409 - 10737.822: 98.3808% ( 12) 00:06:54.381 10737.822 - 10788.234: 98.4633% ( 16) 00:06:54.381 10788.234 - 10838.646: 98.5252% ( 12) 00:06:54.381 10838.646 - 10889.058: 98.5767% ( 10) 00:06:54.381 10889.058 - 10939.471: 98.6180% ( 8) 00:06:54.381 10939.471 - 10989.883: 98.6541% ( 7) 00:06:54.381 10989.883 - 11040.295: 98.6850% ( 6) 00:06:54.381 11040.295 - 11090.708: 98.7160% ( 6) 00:06:54.381 11090.708 - 11141.120: 98.7417% ( 5) 00:06:54.381 11141.120 - 11191.532: 98.7778% ( 7) 00:06:54.381 11191.532 - 11241.945: 98.8139% ( 7) 00:06:54.381 11241.945 - 11292.357: 98.8500% ( 7) 00:06:54.381 11292.357 - 11342.769: 98.8758% ( 5) 00:06:54.381 11342.769 - 11393.182: 98.8913% ( 3) 00:06:54.381 11393.182 - 11443.594: 98.9119% ( 4) 00:06:54.381 11443.594 - 11494.006: 98.9325% ( 4) 00:06:54.381 11494.006 - 11544.418: 98.9480% ( 3) 00:06:54.381 11544.418 - 11594.831: 98.9686% ( 4) 00:06:54.381 11594.831 - 11645.243: 98.9841% ( 3) 00:06:54.381 11645.243 - 11695.655: 99.0047% ( 4) 00:06:54.381 11695.655 - 11746.068: 99.0099% ( 1) 00:06:54.381 12451.840 - 12502.252: 99.0357% ( 5) 00:06:54.381 12502.252 - 12552.665: 99.0460% ( 2) 00:06:54.381 12552.665 - 12603.077: 99.0666% ( 4) 00:06:54.381 12603.077 - 12653.489: 99.0873% ( 4) 00:06:54.381 12653.489 - 12703.902: 99.1079% ( 4) 00:06:54.381 12703.902 - 12754.314: 99.1285% ( 4) 00:06:54.381 12754.314 - 12804.726: 99.1440% ( 3) 00:06:54.381 12804.726 - 12855.138: 99.1646% ( 4) 00:06:54.381 12855.138 - 12905.551: 99.1852% ( 4) 00:06:54.381 12905.551 - 13006.375: 99.2213% ( 7) 00:06:54.381 13006.375 - 13107.200: 99.2574% ( 7) 00:06:54.381 13107.200 - 13208.025: 99.2987% ( 8) 00:06:54.381 13208.025 - 13308.849: 99.3348% ( 7) 00:06:54.381 13308.849 - 13409.674: 99.3399% ( 1) 00:06:54.381 23290.486 - 23391.311: 99.3606% ( 4) 00:06:54.381 23391.311 - 23492.135: 99.3812% ( 4) 00:06:54.381 23492.135 - 23592.960: 99.4018% ( 4) 00:06:54.381 23592.960 - 23693.785: 99.4224% ( 4) 00:06:54.381 23693.785 - 23794.609: 99.4379% ( 3) 00:06:54.381 23794.609 - 23895.434: 99.4585% ( 4) 00:06:54.381 23895.434 - 23996.258: 99.4843% ( 5) 00:06:54.381 
23996.258 - 24097.083: 99.5050% ( 4) 00:06:54.381 24097.083 - 24197.908: 99.5256% ( 4) 00:06:54.381 24197.908 - 24298.732: 99.5410% ( 3) 00:06:54.381 24298.732 - 24399.557: 99.5617% ( 4) 00:06:54.381 24399.557 - 24500.382: 99.5823% ( 4) 00:06:54.381 24500.382 - 24601.206: 99.6029% ( 4) 00:06:54.381 24601.206 - 24702.031: 99.6184% ( 3) 00:06:54.381 24702.031 - 24802.855: 99.6339% ( 3) 00:06:54.381 24802.855 - 24903.680: 99.6597% ( 5) 00:06:54.381 24903.680 - 25004.505: 99.6700% ( 2) 00:06:54.381 27625.945 - 27827.594: 99.6906% ( 4) 00:06:54.381 27827.594 - 28029.243: 99.7370% ( 9) 00:06:54.381 28029.243 - 28230.892: 99.7783% ( 8) 00:06:54.381 28230.892 - 28432.542: 99.8195% ( 8) 00:06:54.381 28432.542 - 28634.191: 99.8608% ( 8) 00:06:54.381 28634.191 - 28835.840: 99.9072% ( 9) 00:06:54.381 28835.840 - 29037.489: 99.9484% ( 8) 00:06:54.381 29037.489 - 29239.138: 99.9948% ( 9) 00:06:54.381 29239.138 - 29440.788: 100.0000% ( 1) 00:06:54.381 00:06:54.381 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:54.381 ============================================================================== 00:06:54.381 Range in us Cumulative IO count 00:06:54.381 5545.354 - 5570.560: 0.0155% ( 3) 00:06:54.381 5570.560 - 5595.766: 0.0877% ( 14) 00:06:54.381 5595.766 - 5620.972: 0.2578% ( 33) 00:06:54.381 5620.972 - 5646.178: 0.5724% ( 61) 00:06:54.381 5646.178 - 5671.385: 1.1345% ( 109) 00:06:54.381 5671.385 - 5696.591: 1.8255% ( 134) 00:06:54.381 5696.591 - 5721.797: 2.5835% ( 147) 00:06:54.381 5721.797 - 5747.003: 3.6561% ( 208) 00:06:54.381 5747.003 - 5772.209: 4.8783% ( 237) 00:06:54.381 5772.209 - 5797.415: 6.4717% ( 309) 00:06:54.381 5797.415 - 5822.622: 8.3075% ( 356) 00:06:54.381 5822.622 - 5847.828: 10.3187% ( 390) 00:06:54.381 5847.828 - 5873.034: 12.4587% ( 415) 00:06:54.381 5873.034 - 5898.240: 14.5936% ( 414) 00:06:54.381 5898.240 - 5923.446: 16.8626% ( 440) 00:06:54.381 5923.446 - 5948.652: 19.0233% ( 419) 00:06:54.381 5948.652 - 5973.858: 21.3748% ( 456) 00:06:54.381 5973.858 - 5999.065: 23.8036% ( 471) 00:06:54.381 5999.065 - 6024.271: 26.2118% ( 467) 00:06:54.381 6024.271 - 6049.477: 28.7593% ( 494) 00:06:54.381 6049.477 - 6074.683: 31.3480% ( 502) 00:06:54.381 6074.683 - 6099.889: 33.9160% ( 498) 00:06:54.381 6099.889 - 6125.095: 36.4480% ( 491) 00:06:54.381 6125.095 - 6150.302: 38.9800% ( 491) 00:06:54.381 6150.302 - 6175.508: 41.5584% ( 500) 00:06:54.381 6175.508 - 6200.714: 44.1780% ( 508) 00:06:54.381 6200.714 - 6225.920: 46.6945% ( 488) 00:06:54.381 6225.920 - 6251.126: 49.3193% ( 509) 00:06:54.381 6251.126 - 6276.332: 51.8822% ( 497) 00:06:54.381 6276.332 - 6301.538: 54.5225% ( 512) 00:06:54.381 6301.538 - 6326.745: 57.1421% ( 508) 00:06:54.381 6326.745 - 6351.951: 59.7566% ( 507) 00:06:54.381 6351.951 - 6377.157: 62.3659% ( 506) 00:06:54.381 6377.157 - 6402.363: 64.9701% ( 505) 00:06:54.381 6402.363 - 6427.569: 67.6104% ( 512) 00:06:54.381 6427.569 - 6452.775: 70.0701% ( 477) 00:06:54.381 6452.775 - 6503.188: 74.6751% ( 893) 00:06:54.381 6503.188 - 6553.600: 78.6304% ( 767) 00:06:54.381 6553.600 - 6604.012: 81.9616% ( 646) 00:06:54.381 6604.012 - 6654.425: 84.3853% ( 470) 00:06:54.381 6654.425 - 6704.837: 85.9788% ( 309) 00:06:54.381 6704.837 - 6755.249: 87.1029% ( 218) 00:06:54.381 6755.249 - 6805.662: 88.0208% ( 178) 00:06:54.381 6805.662 - 6856.074: 88.7015% ( 132) 00:06:54.381 6856.074 - 6906.486: 89.1914% ( 95) 00:06:54.381 6906.486 - 6956.898: 89.5730% ( 74) 00:06:54.381 6956.898 - 7007.311: 89.8566% ( 55) 00:06:54.381 7007.311 - 7057.723: 90.1042% ( 48) 
00:06:54.381 7057.723 - 7108.135: 90.3053% ( 39) 00:06:54.381 7108.135 - 7158.548: 90.4755% ( 33) 00:06:54.381 7158.548 - 7208.960: 90.5992% ( 24) 00:06:54.381 7208.960 - 7259.372: 90.7178% ( 23) 00:06:54.381 7259.372 - 7309.785: 90.8210% ( 20) 00:06:54.381 7309.785 - 7360.197: 90.9138% ( 18) 00:06:54.381 7360.197 - 7410.609: 91.0066% ( 18) 00:06:54.381 7410.609 - 7461.022: 91.1200% ( 22) 00:06:54.381 7461.022 - 7511.434: 91.2335% ( 22) 00:06:54.381 7511.434 - 7561.846: 91.3315% ( 19) 00:06:54.381 7561.846 - 7612.258: 91.4140% ( 16) 00:06:54.381 7612.258 - 7662.671: 91.4965% ( 16) 00:06:54.381 7662.671 - 7713.083: 91.5842% ( 17) 00:06:54.381 7713.083 - 7763.495: 91.6770% ( 18) 00:06:54.381 7763.495 - 7813.908: 91.8007% ( 24) 00:06:54.381 7813.908 - 7864.320: 91.9193% ( 23) 00:06:54.381 7864.320 - 7914.732: 92.0431% ( 24) 00:06:54.381 7914.732 - 7965.145: 92.1669% ( 24) 00:06:54.381 7965.145 - 8015.557: 92.3164% ( 29) 00:06:54.381 8015.557 - 8065.969: 92.4711% ( 30) 00:06:54.381 8065.969 - 8116.382: 92.6207% ( 29) 00:06:54.381 8116.382 - 8166.794: 92.7857% ( 32) 00:06:54.381 8166.794 - 8217.206: 92.9816% ( 38) 00:06:54.381 8217.206 - 8267.618: 93.1363% ( 30) 00:06:54.381 8267.618 - 8318.031: 93.3168% ( 35) 00:06:54.381 8318.031 - 8368.443: 93.5025% ( 36) 00:06:54.381 8368.443 - 8418.855: 93.6778% ( 34) 00:06:54.381 8418.855 - 8469.268: 93.8428% ( 32) 00:06:54.381 8469.268 - 8519.680: 94.0336% ( 37) 00:06:54.381 8519.680 - 8570.092: 94.2141% ( 35) 00:06:54.381 8570.092 - 8620.505: 94.3740% ( 31) 00:06:54.381 8620.505 - 8670.917: 94.5338% ( 31) 00:06:54.381 8670.917 - 8721.329: 94.6988% ( 32) 00:06:54.381 8721.329 - 8771.742: 94.8690% ( 33) 00:06:54.381 8771.742 - 8822.154: 95.0340% ( 32) 00:06:54.381 8822.154 - 8872.566: 95.1784% ( 28) 00:06:54.381 8872.566 - 8922.978: 95.3589% ( 35) 00:06:54.381 8922.978 - 8973.391: 95.5136% ( 30) 00:06:54.381 8973.391 - 9023.803: 95.6425% ( 25) 00:06:54.381 9023.803 - 9074.215: 95.7611% ( 23) 00:06:54.381 9074.215 - 9124.628: 95.8694% ( 21) 00:06:54.381 9124.628 - 9175.040: 95.9623% ( 18) 00:06:54.381 9175.040 - 9225.452: 96.0654% ( 20) 00:06:54.381 9225.452 - 9275.865: 96.1582% ( 18) 00:06:54.382 9275.865 - 9326.277: 96.2665% ( 21) 00:06:54.382 9326.277 - 9376.689: 96.3490% ( 16) 00:06:54.382 9376.689 - 9427.102: 96.4315% ( 16) 00:06:54.382 9427.102 - 9477.514: 96.5192% ( 17) 00:06:54.382 9477.514 - 9527.926: 96.6068% ( 17) 00:06:54.382 9527.926 - 9578.338: 96.7203% ( 22) 00:06:54.382 9578.338 - 9628.751: 96.8183% ( 19) 00:06:54.382 9628.751 - 9679.163: 96.9059% ( 17) 00:06:54.382 9679.163 - 9729.575: 96.9884% ( 16) 00:06:54.382 9729.575 - 9779.988: 97.0864% ( 19) 00:06:54.382 9779.988 - 9830.400: 97.1638% ( 15) 00:06:54.382 9830.400 - 9880.812: 97.2463% ( 16) 00:06:54.382 9880.812 - 9931.225: 97.3236% ( 15) 00:06:54.382 9931.225 - 9981.637: 97.3907% ( 13) 00:06:54.382 9981.637 - 10032.049: 97.4783% ( 17) 00:06:54.382 10032.049 - 10082.462: 97.5557% ( 15) 00:06:54.382 10082.462 - 10132.874: 97.6330% ( 15) 00:06:54.382 10132.874 - 10183.286: 97.7259% ( 18) 00:06:54.382 10183.286 - 10233.698: 97.8238% ( 19) 00:06:54.382 10233.698 - 10284.111: 97.9115% ( 17) 00:06:54.382 10284.111 - 10334.523: 97.9940% ( 16) 00:06:54.382 10334.523 - 10384.935: 98.0456% ( 10) 00:06:54.382 10384.935 - 10435.348: 98.1075% ( 12) 00:06:54.382 10435.348 - 10485.760: 98.1642% ( 11) 00:06:54.382 10485.760 - 10536.172: 98.2003% ( 7) 00:06:54.382 10536.172 - 10586.585: 98.2364% ( 7) 00:06:54.382 10586.585 - 10636.997: 98.2725% ( 7) 00:06:54.382 10636.997 - 10687.409: 98.3137% ( 8) 
00:06:54.382 10687.409 - 10737.822: 98.3292% ( 3) 00:06:54.382 10737.822 - 10788.234: 98.3447% ( 3) 00:06:54.382 10788.234 - 10838.646: 98.3859% ( 8) 00:06:54.382 10838.646 - 10889.058: 98.4323% ( 9) 00:06:54.382 10889.058 - 10939.471: 98.4684% ( 7) 00:06:54.382 10939.471 - 10989.883: 98.5200% ( 10) 00:06:54.382 10989.883 - 11040.295: 98.5716% ( 10) 00:06:54.382 11040.295 - 11090.708: 98.6180% ( 9) 00:06:54.382 11090.708 - 11141.120: 98.6644% ( 9) 00:06:54.382 11141.120 - 11191.532: 98.7108% ( 9) 00:06:54.382 11191.532 - 11241.945: 98.7624% ( 10) 00:06:54.382 11241.945 - 11292.357: 98.8088% ( 9) 00:06:54.382 11292.357 - 11342.769: 98.8500% ( 8) 00:06:54.382 11342.769 - 11393.182: 98.8861% ( 7) 00:06:54.382 11393.182 - 11443.594: 98.9119% ( 5) 00:06:54.382 11443.594 - 11494.006: 98.9377% ( 5) 00:06:54.382 11494.006 - 11544.418: 98.9635% ( 5) 00:06:54.382 11544.418 - 11594.831: 98.9893% ( 5) 00:06:54.382 11594.831 - 11645.243: 99.0099% ( 4) 00:06:54.382 12451.840 - 12502.252: 99.0357% ( 5) 00:06:54.382 12502.252 - 12552.665: 99.0512% ( 3) 00:06:54.382 12552.665 - 12603.077: 99.0718% ( 4) 00:06:54.382 12603.077 - 12653.489: 99.0924% ( 4) 00:06:54.382 12653.489 - 12703.902: 99.1130% ( 4) 00:06:54.382 12703.902 - 12754.314: 99.1337% ( 4) 00:06:54.382 12754.314 - 12804.726: 99.1543% ( 4) 00:06:54.382 12804.726 - 12855.138: 99.1698% ( 3) 00:06:54.382 12855.138 - 12905.551: 99.1904% ( 4) 00:06:54.382 12905.551 - 13006.375: 99.2265% ( 7) 00:06:54.382 13006.375 - 13107.200: 99.2677% ( 8) 00:06:54.382 13107.200 - 13208.025: 99.3038% ( 7) 00:06:54.382 13208.025 - 13308.849: 99.3399% ( 7) 00:06:54.382 21475.643 - 21576.468: 99.3451% ( 1) 00:06:54.382 21576.468 - 21677.292: 99.3709% ( 5) 00:06:54.382 21677.292 - 21778.117: 99.3915% ( 4) 00:06:54.382 21778.117 - 21878.942: 99.4121% ( 4) 00:06:54.382 21878.942 - 21979.766: 99.4328% ( 4) 00:06:54.382 21979.766 - 22080.591: 99.4534% ( 4) 00:06:54.382 22080.591 - 22181.415: 99.4792% ( 5) 00:06:54.382 22181.415 - 22282.240: 99.4998% ( 4) 00:06:54.382 22282.240 - 22383.065: 99.5204% ( 4) 00:06:54.382 22383.065 - 22483.889: 99.5410% ( 4) 00:06:54.382 22483.889 - 22584.714: 99.5668% ( 5) 00:06:54.382 22584.714 - 22685.538: 99.5875% ( 4) 00:06:54.382 22685.538 - 22786.363: 99.6081% ( 4) 00:06:54.382 22786.363 - 22887.188: 99.6287% ( 4) 00:06:54.382 22887.188 - 22988.012: 99.6493% ( 4) 00:06:54.382 22988.012 - 23088.837: 99.6700% ( 4) 00:06:54.382 25811.102 - 26012.751: 99.7061% ( 7) 00:06:54.382 26012.751 - 26214.400: 99.7525% ( 9) 00:06:54.382 26214.400 - 26416.049: 99.7937% ( 8) 00:06:54.382 26416.049 - 26617.698: 99.8350% ( 8) 00:06:54.382 26617.698 - 26819.348: 99.8814% ( 9) 00:06:54.382 26819.348 - 27020.997: 99.9226% ( 8) 00:06:54.382 27020.997 - 27222.646: 99.9639% ( 8) 00:06:54.382 27222.646 - 27424.295: 100.0000% ( 7) 00:06:54.382 00:06:54.382 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:54.382 ============================================================================== 00:06:54.382 Range in us Cumulative IO count 00:06:54.382 5520.148 - 5545.354: 0.0052% ( 1) 00:06:54.382 5545.354 - 5570.560: 0.0258% ( 4) 00:06:54.382 5570.560 - 5595.766: 0.1392% ( 22) 00:06:54.382 5595.766 - 5620.972: 0.2991% ( 31) 00:06:54.382 5620.972 - 5646.178: 0.6291% ( 64) 00:06:54.382 5646.178 - 5671.385: 1.1448% ( 100) 00:06:54.382 5671.385 - 5696.591: 1.7327% ( 114) 00:06:54.382 5696.591 - 5721.797: 2.5423% ( 157) 00:06:54.382 5721.797 - 5747.003: 3.6252% ( 210) 00:06:54.382 5747.003 - 5772.209: 4.8422% ( 236) 00:06:54.382 5772.209 - 5797.415: 6.4150% 
( 305) 00:06:54.382 5797.415 - 5822.622: 8.2818% ( 362) 00:06:54.382 5822.622 - 5847.828: 10.2104% ( 374) 00:06:54.382 5847.828 - 5873.034: 12.4948% ( 443) 00:06:54.382 5873.034 - 5898.240: 14.6194% ( 412) 00:06:54.382 5898.240 - 5923.446: 16.8472% ( 432) 00:06:54.382 5923.446 - 5948.652: 19.2090% ( 458) 00:06:54.382 5948.652 - 5973.858: 21.5862% ( 461) 00:06:54.382 5973.858 - 5999.065: 24.0666% ( 481) 00:06:54.382 5999.065 - 6024.271: 26.5367% ( 479) 00:06:54.382 6024.271 - 6049.477: 29.1357% ( 504) 00:06:54.382 6049.477 - 6074.683: 31.5852% ( 475) 00:06:54.382 6074.683 - 6099.889: 34.0656% ( 481) 00:06:54.382 6099.889 - 6125.095: 36.6027% ( 492) 00:06:54.382 6125.095 - 6150.302: 39.1863% ( 501) 00:06:54.382 6150.302 - 6175.508: 41.7182% ( 491) 00:06:54.382 6175.508 - 6200.714: 44.2760% ( 496) 00:06:54.382 6200.714 - 6225.920: 46.8698% ( 503) 00:06:54.382 6225.920 - 6251.126: 49.4895% ( 508) 00:06:54.382 6251.126 - 6276.332: 52.0885% ( 504) 00:06:54.382 6276.332 - 6301.538: 54.6411% ( 495) 00:06:54.382 6301.538 - 6326.745: 57.2710% ( 510) 00:06:54.382 6326.745 - 6351.951: 59.8546% ( 501) 00:06:54.382 6351.951 - 6377.157: 62.4742% ( 508) 00:06:54.382 6377.157 - 6402.363: 65.0681% ( 503) 00:06:54.382 6402.363 - 6427.569: 67.5743% ( 486) 00:06:54.382 6427.569 - 6452.775: 70.0186% ( 474) 00:06:54.382 6452.775 - 6503.188: 74.6029% ( 889) 00:06:54.382 6503.188 - 6553.600: 78.5479% ( 765) 00:06:54.382 6553.600 - 6604.012: 81.8327% ( 637) 00:06:54.382 6604.012 - 6654.425: 84.2873% ( 476) 00:06:54.382 6654.425 - 6704.837: 85.8653% ( 306) 00:06:54.382 6704.837 - 6755.249: 86.9740% ( 215) 00:06:54.382 6755.249 - 6805.662: 87.8042% ( 161) 00:06:54.382 6805.662 - 6856.074: 88.3973% ( 115) 00:06:54.382 6856.074 - 6906.486: 88.7943% ( 77) 00:06:54.382 6906.486 - 6956.898: 89.0934% ( 58) 00:06:54.382 6956.898 - 7007.311: 89.3255% ( 45) 00:06:54.382 7007.311 - 7057.723: 89.5266% ( 39) 00:06:54.382 7057.723 - 7108.135: 89.7071% ( 35) 00:06:54.382 7108.135 - 7158.548: 89.9082% ( 39) 00:06:54.382 7158.548 - 7208.960: 90.1042% ( 38) 00:06:54.382 7208.960 - 7259.372: 90.2847% ( 35) 00:06:54.382 7259.372 - 7309.785: 90.4497% ( 32) 00:06:54.382 7309.785 - 7360.197: 90.6044% ( 30) 00:06:54.382 7360.197 - 7410.609: 90.7591% ( 30) 00:06:54.382 7410.609 - 7461.022: 90.9035% ( 28) 00:06:54.382 7461.022 - 7511.434: 91.0736% ( 33) 00:06:54.382 7511.434 - 7561.846: 91.2490% ( 34) 00:06:54.382 7561.846 - 7612.258: 91.3934% ( 28) 00:06:54.382 7612.258 - 7662.671: 91.5377% ( 28) 00:06:54.382 7662.671 - 7713.083: 91.6718% ( 26) 00:06:54.382 7713.083 - 7763.495: 91.8214% ( 29) 00:06:54.382 7763.495 - 7813.908: 91.9503% ( 25) 00:06:54.382 7813.908 - 7864.320: 92.0947% ( 28) 00:06:54.382 7864.320 - 7914.732: 92.2184% ( 24) 00:06:54.382 7914.732 - 7965.145: 92.3370% ( 23) 00:06:54.382 7965.145 - 8015.557: 92.4453% ( 21) 00:06:54.382 8015.557 - 8065.969: 92.5227% ( 15) 00:06:54.382 8065.969 - 8116.382: 92.6052% ( 16) 00:06:54.382 8116.382 - 8166.794: 92.6722% ( 13) 00:06:54.382 8166.794 - 8217.206: 92.8063% ( 26) 00:06:54.382 8217.206 - 8267.618: 92.9765% ( 33) 00:06:54.382 8267.618 - 8318.031: 93.1002% ( 24) 00:06:54.382 8318.031 - 8368.443: 93.2446% ( 28) 00:06:54.382 8368.443 - 8418.855: 93.4045% ( 31) 00:06:54.383 8418.855 - 8469.268: 93.6056% ( 39) 00:06:54.383 8469.268 - 8519.680: 93.7861% ( 35) 00:06:54.383 8519.680 - 8570.092: 94.0130% ( 44) 00:06:54.383 8570.092 - 8620.505: 94.2141% ( 39) 00:06:54.383 8620.505 - 8670.917: 94.4101% ( 38) 00:06:54.383 8670.917 - 8721.329: 94.6009% ( 37) 00:06:54.383 8721.329 - 8771.742: 
94.8020% ( 39) 00:06:54.383 8771.742 - 8822.154: 94.9722% ( 33) 00:06:54.383 8822.154 - 8872.566: 95.1681% ( 38) 00:06:54.383 8872.566 - 8922.978: 95.3434% ( 34) 00:06:54.383 8922.978 - 8973.391: 95.5239% ( 35) 00:06:54.383 8973.391 - 9023.803: 95.6993% ( 34) 00:06:54.383 9023.803 - 9074.215: 95.8488% ( 29) 00:06:54.383 9074.215 - 9124.628: 95.9932% ( 28) 00:06:54.383 9124.628 - 9175.040: 96.1479% ( 30) 00:06:54.383 9175.040 - 9225.452: 96.2717% ( 24) 00:06:54.383 9225.452 - 9275.865: 96.4109% ( 27) 00:06:54.383 9275.865 - 9326.277: 96.5398% ( 25) 00:06:54.383 9326.277 - 9376.689: 96.6326% ( 18) 00:06:54.383 9376.689 - 9427.102: 96.7255% ( 18) 00:06:54.383 9427.102 - 9477.514: 96.8080% ( 16) 00:06:54.383 9477.514 - 9527.926: 96.8750% ( 13) 00:06:54.383 9527.926 - 9578.338: 96.9369% ( 12) 00:06:54.383 9578.338 - 9628.751: 96.9936% ( 11) 00:06:54.383 9628.751 - 9679.163: 97.0555% ( 12) 00:06:54.383 9679.163 - 9729.575: 97.1225% ( 13) 00:06:54.383 9729.575 - 9779.988: 97.1844% ( 12) 00:06:54.383 9779.988 - 9830.400: 97.2463% ( 12) 00:06:54.383 9830.400 - 9880.812: 97.3340% ( 17) 00:06:54.383 9880.812 - 9931.225: 97.4165% ( 16) 00:06:54.383 9931.225 - 9981.637: 97.4835% ( 13) 00:06:54.383 9981.637 - 10032.049: 97.5712% ( 17) 00:06:54.383 10032.049 - 10082.462: 97.6434% ( 14) 00:06:54.383 10082.462 - 10132.874: 97.7052% ( 12) 00:06:54.383 10132.874 - 10183.286: 97.7826% ( 15) 00:06:54.383 10183.286 - 10233.698: 97.8187% ( 7) 00:06:54.383 10233.698 - 10284.111: 97.8599% ( 8) 00:06:54.383 10284.111 - 10334.523: 97.8960% ( 7) 00:06:54.383 10334.523 - 10384.935: 97.9373% ( 8) 00:06:54.383 10384.935 - 10435.348: 97.9734% ( 7) 00:06:54.383 10435.348 - 10485.760: 98.0095% ( 7) 00:06:54.383 10485.760 - 10536.172: 98.0559% ( 9) 00:06:54.383 10536.172 - 10586.585: 98.1075% ( 10) 00:06:54.383 10586.585 - 10636.997: 98.1693% ( 12) 00:06:54.383 10636.997 - 10687.409: 98.2209% ( 10) 00:06:54.383 10687.409 - 10737.822: 98.2673% ( 9) 00:06:54.383 10737.822 - 10788.234: 98.3292% ( 12) 00:06:54.383 10788.234 - 10838.646: 98.3859% ( 11) 00:06:54.383 10838.646 - 10889.058: 98.4427% ( 11) 00:06:54.383 10889.058 - 10939.471: 98.5097% ( 13) 00:06:54.383 10939.471 - 10989.883: 98.5664% ( 11) 00:06:54.383 10989.883 - 11040.295: 98.6180% ( 10) 00:06:54.383 11040.295 - 11090.708: 98.6747% ( 11) 00:06:54.383 11090.708 - 11141.120: 98.7366% ( 12) 00:06:54.383 11141.120 - 11191.532: 98.7830% ( 9) 00:06:54.383 11191.532 - 11241.945: 98.8243% ( 8) 00:06:54.383 11241.945 - 11292.357: 98.8655% ( 8) 00:06:54.383 11292.357 - 11342.769: 98.8965% ( 6) 00:06:54.383 11342.769 - 11393.182: 98.9119% ( 3) 00:06:54.383 11393.182 - 11443.594: 98.9325% ( 4) 00:06:54.383 11443.594 - 11494.006: 98.9532% ( 4) 00:06:54.383 11494.006 - 11544.418: 98.9686% ( 3) 00:06:54.383 11544.418 - 11594.831: 98.9893% ( 4) 00:06:54.383 11594.831 - 11645.243: 99.0099% ( 4) 00:06:54.383 12401.428 - 12451.840: 99.0202% ( 2) 00:06:54.383 12451.840 - 12502.252: 99.0357% ( 3) 00:06:54.383 12502.252 - 12552.665: 99.0563% ( 4) 00:06:54.383 12552.665 - 12603.077: 99.0769% ( 4) 00:06:54.383 12603.077 - 12653.489: 99.0976% ( 4) 00:06:54.383 12653.489 - 12703.902: 99.1182% ( 4) 00:06:54.383 12703.902 - 12754.314: 99.1388% ( 4) 00:06:54.383 12754.314 - 12804.726: 99.1543% ( 3) 00:06:54.383 12804.726 - 12855.138: 99.1749% ( 4) 00:06:54.383 12855.138 - 12905.551: 99.1955% ( 4) 00:06:54.383 12905.551 - 13006.375: 99.2368% ( 8) 00:06:54.383 13006.375 - 13107.200: 99.2729% ( 7) 00:06:54.383 13107.200 - 13208.025: 99.3142% ( 8) 00:06:54.383 13208.025 - 13308.849: 99.3399% ( 5) 
00:06:54.383 19761.625 - 19862.449: 99.3554% ( 3) 00:06:54.383 19862.449 - 19963.274: 99.3760% ( 4) 00:06:54.383 19963.274 - 20064.098: 99.3967% ( 4) 00:06:54.383 20064.098 - 20164.923: 99.4173% ( 4) 00:06:54.383 20164.923 - 20265.748: 99.4379% ( 4) 00:06:54.383 20265.748 - 20366.572: 99.4585% ( 4) 00:06:54.383 20366.572 - 20467.397: 99.4792% ( 4) 00:06:54.383 20467.397 - 20568.222: 99.5050% ( 5) 00:06:54.383 20568.222 - 20669.046: 99.5256% ( 4) 00:06:54.383 20669.046 - 20769.871: 99.5462% ( 4) 00:06:54.383 20769.871 - 20870.695: 99.5668% ( 4) 00:06:54.383 20870.695 - 20971.520: 99.5875% ( 4) 00:06:54.383 20971.520 - 21072.345: 99.6132% ( 5) 00:06:54.383 21072.345 - 21173.169: 99.6339% ( 4) 00:06:54.383 21173.169 - 21273.994: 99.6545% ( 4) 00:06:54.383 21273.994 - 21374.818: 99.6700% ( 3) 00:06:54.383 23996.258 - 24097.083: 99.6803% ( 2) 00:06:54.383 24097.083 - 24197.908: 99.7009% ( 4) 00:06:54.383 24197.908 - 24298.732: 99.7215% ( 4) 00:06:54.383 24298.732 - 24399.557: 99.7422% ( 4) 00:06:54.383 24399.557 - 24500.382: 99.7628% ( 4) 00:06:54.383 24500.382 - 24601.206: 99.7834% ( 4) 00:06:54.383 24601.206 - 24702.031: 99.8092% ( 5) 00:06:54.383 24702.031 - 24802.855: 99.8298% ( 4) 00:06:54.383 24802.855 - 24903.680: 99.8505% ( 4) 00:06:54.383 24903.680 - 25004.505: 99.8711% ( 4) 00:06:54.383 25004.505 - 25105.329: 99.8917% ( 4) 00:06:54.383 25105.329 - 25206.154: 99.9175% ( 5) 00:06:54.383 25206.154 - 25306.978: 99.9381% ( 4) 00:06:54.383 25306.978 - 25407.803: 99.9587% ( 4) 00:06:54.383 25407.803 - 25508.628: 99.9794% ( 4) 00:06:54.383 25508.628 - 25609.452: 100.0000% ( 4) 00:06:54.383 00:06:54.383 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:54.383 ============================================================================== 00:06:54.383 Range in us Cumulative IO count 00:06:54.383 5545.354 - 5570.560: 0.0051% ( 1) 00:06:54.383 5570.560 - 5595.766: 0.0977% ( 18) 00:06:54.383 5595.766 - 5620.972: 0.2159% ( 23) 00:06:54.383 5620.972 - 5646.178: 0.5859% ( 72) 00:06:54.383 5646.178 - 5671.385: 1.0691% ( 94) 00:06:54.383 5671.385 - 5696.591: 1.6910% ( 121) 00:06:54.383 5696.591 - 5721.797: 2.5442% ( 166) 00:06:54.383 5721.797 - 5747.003: 3.5413% ( 194) 00:06:54.383 5747.003 - 5772.209: 4.7749% ( 240) 00:06:54.383 5772.209 - 5797.415: 6.5378% ( 343) 00:06:54.383 5797.415 - 5822.622: 8.3676% ( 356) 00:06:54.383 5822.622 - 5847.828: 10.3156% ( 379) 00:06:54.383 5847.828 - 5873.034: 12.3818% ( 402) 00:06:54.383 5873.034 - 5898.240: 14.5816% ( 428) 00:06:54.383 5898.240 - 5923.446: 16.8740% ( 446) 00:06:54.383 5923.446 - 5948.652: 19.2280% ( 458) 00:06:54.383 5948.652 - 5973.858: 21.6643% ( 474) 00:06:54.383 5973.858 - 5999.065: 24.0389% ( 462) 00:06:54.383 5999.065 - 6024.271: 26.5008% ( 479) 00:06:54.383 6024.271 - 6049.477: 29.0347% ( 493) 00:06:54.383 6049.477 - 6074.683: 31.5892% ( 497) 00:06:54.383 6074.683 - 6099.889: 34.1488% ( 498) 00:06:54.383 6099.889 - 6125.095: 36.6776% ( 492) 00:06:54.383 6125.095 - 6150.302: 39.2321% ( 497) 00:06:54.383 6150.302 - 6175.508: 41.8020% ( 500) 00:06:54.383 6175.508 - 6200.714: 44.4233% ( 510) 00:06:54.383 6200.714 - 6225.920: 47.0446% ( 510) 00:06:54.383 6225.920 - 6251.126: 49.5426% ( 486) 00:06:54.383 6251.126 - 6276.332: 52.0611% ( 490) 00:06:54.383 6276.332 - 6301.538: 54.7081% ( 515) 00:06:54.383 6301.538 - 6326.745: 57.2728% ( 499) 00:06:54.383 6326.745 - 6351.951: 59.8581% ( 503) 00:06:54.383 6351.951 - 6377.157: 62.4537% ( 505) 00:06:54.383 6377.157 - 6402.363: 64.9928% ( 494) 00:06:54.383 6402.363 - 6427.569: 67.5678% ( 
501) 00:06:54.383 6427.569 - 6452.775: 70.0298% ( 479) 00:06:54.383 6452.775 - 6503.188: 74.5528% ( 880) 00:06:54.383 6503.188 - 6553.600: 78.5362% ( 775) 00:06:54.383 6553.600 - 6604.012: 81.8205% ( 639) 00:06:54.383 6604.012 - 6654.425: 84.1180% ( 447) 00:06:54.383 6654.425 - 6704.837: 85.6908% ( 306) 00:06:54.383 6704.837 - 6755.249: 86.7496% ( 206) 00:06:54.383 6755.249 - 6805.662: 87.4846% ( 143) 00:06:54.383 6805.662 - 6856.074: 88.1014% ( 120) 00:06:54.383 6856.074 - 6906.486: 88.5742% ( 92) 00:06:54.383 6906.486 - 6956.898: 88.9289% ( 69) 00:06:54.383 6956.898 - 7007.311: 89.1859% ( 50) 00:06:54.383 7007.311 - 7057.723: 89.4223% ( 46) 00:06:54.383 7057.723 - 7108.135: 89.6382% ( 42) 00:06:54.383 7108.135 - 7158.548: 89.8643% ( 44) 00:06:54.383 7158.548 - 7208.960: 90.1007% ( 46) 00:06:54.383 7208.960 - 7259.372: 90.3012% ( 39) 00:06:54.383 7259.372 - 7309.785: 90.4914% ( 37) 00:06:54.383 7309.785 - 7360.197: 90.6610% ( 33) 00:06:54.383 7360.197 - 7410.609: 90.8409% ( 35) 00:06:54.383 7410.609 - 7461.022: 91.0105% ( 33) 00:06:54.383 7461.022 - 7511.434: 91.1904% ( 35) 00:06:54.383 7511.434 - 7561.846: 91.3446% ( 30) 00:06:54.383 7561.846 - 7612.258: 91.4731% ( 25) 00:06:54.383 7612.258 - 7662.671: 91.6170% ( 28) 00:06:54.383 7662.671 - 7713.083: 91.7506% ( 26) 00:06:54.383 7713.083 - 7763.495: 91.9048% ( 30) 00:06:54.383 7763.495 - 7813.908: 92.0436% ( 27) 00:06:54.383 7813.908 - 7864.320: 92.1618% ( 23) 00:06:54.383 7864.320 - 7914.732: 92.2800% ( 23) 00:06:54.383 7914.732 - 7965.145: 92.3880% ( 21) 00:06:54.383 7965.145 - 8015.557: 92.4959% ( 21) 00:06:54.383 8015.557 - 8065.969: 92.5935% ( 19) 00:06:54.383 8065.969 - 8116.382: 92.6963% ( 20) 00:06:54.383 8116.382 - 8166.794: 92.8043% ( 21) 00:06:54.383 8166.794 - 8217.206: 92.9174% ( 22) 00:06:54.383 8217.206 - 8267.618: 93.0561% ( 27) 00:06:54.383 8267.618 - 8318.031: 93.1795% ( 24) 00:06:54.383 8318.031 - 8368.443: 93.3183% ( 27) 00:06:54.383 8368.443 - 8418.855: 93.4570% ( 27) 00:06:54.383 8418.855 - 8469.268: 93.6164% ( 31) 00:06:54.383 8469.268 - 8519.680: 93.8117% ( 38) 00:06:54.383 8519.680 - 8570.092: 93.9710% ( 31) 00:06:54.383 8570.092 - 8620.505: 94.1406% ( 33) 00:06:54.383 8620.505 - 8670.917: 94.3102% ( 33) 00:06:54.383 8670.917 - 8721.329: 94.4799% ( 33) 00:06:54.383 8721.329 - 8771.742: 94.6443% ( 32) 00:06:54.383 8771.742 - 8822.154: 94.8139% ( 33) 00:06:54.383 8822.154 - 8872.566: 94.9887% ( 34) 00:06:54.384 8872.566 - 8922.978: 95.1634% ( 34) 00:06:54.384 8922.978 - 8973.391: 95.3536% ( 37) 00:06:54.384 8973.391 - 9023.803: 95.5387% ( 36) 00:06:54.384 9023.803 - 9074.215: 95.7185% ( 35) 00:06:54.384 9074.215 - 9124.628: 95.8676% ( 29) 00:06:54.384 9124.628 - 9175.040: 96.0115% ( 28) 00:06:54.384 9175.040 - 9225.452: 96.1451% ( 26) 00:06:54.384 9225.452 - 9275.865: 96.2531% ( 21) 00:06:54.384 9275.865 - 9326.277: 96.3610% ( 21) 00:06:54.384 9326.277 - 9376.689: 96.4638% ( 20) 00:06:54.384 9376.689 - 9427.102: 96.5718% ( 21) 00:06:54.384 9427.102 - 9477.514: 96.6745% ( 20) 00:06:54.384 9477.514 - 9527.926: 96.7825% ( 21) 00:06:54.384 9527.926 - 9578.338: 96.8956% ( 22) 00:06:54.384 9578.338 - 9628.751: 96.9881% ( 18) 00:06:54.384 9628.751 - 9679.163: 97.0703% ( 16) 00:06:54.384 9679.163 - 9729.575: 97.1474% ( 15) 00:06:54.384 9729.575 - 9779.988: 97.2194% ( 14) 00:06:54.384 9779.988 - 9830.400: 97.2810% ( 12) 00:06:54.384 9830.400 - 9880.812: 97.3581% ( 15) 00:06:54.384 9880.812 - 9931.225: 97.4044% ( 9) 00:06:54.384 9931.225 - 9981.637: 97.4507% ( 9) 00:06:54.384 9981.637 - 10032.049: 97.4918% ( 8) 00:06:54.384 
10032.049 - 10082.462: 97.5380% ( 9) 00:06:54.384 10082.462 - 10132.874: 97.5792% ( 8) 00:06:54.384 10132.874 - 10183.286: 97.6306% ( 10) 00:06:54.384 10183.286 - 10233.698: 97.6717% ( 8) 00:06:54.384 10233.698 - 10284.111: 97.7179% ( 9) 00:06:54.384 10284.111 - 10334.523: 97.7642% ( 9) 00:06:54.384 10334.523 - 10384.935: 97.8053% ( 8) 00:06:54.384 10384.935 - 10435.348: 97.8516% ( 9) 00:06:54.384 10435.348 - 10485.760: 97.9235% ( 14) 00:06:54.384 10485.760 - 10536.172: 97.9955% ( 14) 00:06:54.384 10536.172 - 10586.585: 98.0520% ( 11) 00:06:54.384 10586.585 - 10636.997: 98.1137% ( 12) 00:06:54.384 10636.997 - 10687.409: 98.1702% ( 11) 00:06:54.384 10687.409 - 10737.822: 98.2062% ( 7) 00:06:54.384 10737.822 - 10788.234: 98.2525% ( 9) 00:06:54.384 10788.234 - 10838.646: 98.2987% ( 9) 00:06:54.384 10838.646 - 10889.058: 98.3501% ( 10) 00:06:54.384 10889.058 - 10939.471: 98.4015% ( 10) 00:06:54.384 10939.471 - 10989.883: 98.4529% ( 10) 00:06:54.384 10989.883 - 11040.295: 98.4992% ( 9) 00:06:54.384 11040.295 - 11090.708: 98.5454% ( 9) 00:06:54.384 11090.708 - 11141.120: 98.6020% ( 11) 00:06:54.384 11141.120 - 11191.532: 98.6534% ( 10) 00:06:54.384 11191.532 - 11241.945: 98.7048% ( 10) 00:06:54.384 11241.945 - 11292.357: 98.7459% ( 8) 00:06:54.384 11292.357 - 11342.769: 98.7716% ( 5) 00:06:54.384 11342.769 - 11393.182: 98.8024% ( 6) 00:06:54.384 11393.182 - 11443.594: 98.8281% ( 5) 00:06:54.384 11443.594 - 11494.006: 98.8590% ( 6) 00:06:54.384 11494.006 - 11544.418: 98.8847% ( 5) 00:06:54.384 11544.418 - 11594.831: 98.9206% ( 7) 00:06:54.384 11594.831 - 11645.243: 98.9566% ( 7) 00:06:54.384 11645.243 - 11695.655: 98.9720% ( 3) 00:06:54.384 11695.655 - 11746.068: 98.9926% ( 4) 00:06:54.384 11746.068 - 11796.480: 99.0132% ( 4) 00:06:54.384 11796.480 - 11846.892: 99.0389% ( 5) 00:06:54.384 11846.892 - 11897.305: 99.0594% ( 4) 00:06:54.384 11897.305 - 11947.717: 99.0800% ( 4) 00:06:54.384 11947.717 - 11998.129: 99.1005% ( 4) 00:06:54.384 11998.129 - 12048.542: 99.1160% ( 3) 00:06:54.384 12048.542 - 12098.954: 99.1262% ( 2) 00:06:54.384 12098.954 - 12149.366: 99.1365% ( 2) 00:06:54.384 12149.366 - 12199.778: 99.1468% ( 2) 00:06:54.384 12199.778 - 12250.191: 99.1571% ( 2) 00:06:54.384 12250.191 - 12300.603: 99.1674% ( 2) 00:06:54.384 12300.603 - 12351.015: 99.1776% ( 2) 00:06:54.384 12351.015 - 12401.428: 99.1879% ( 2) 00:06:54.384 12401.428 - 12451.840: 99.1982% ( 2) 00:06:54.384 12451.840 - 12502.252: 99.2085% ( 2) 00:06:54.384 12502.252 - 12552.665: 99.2239% ( 3) 00:06:54.384 12552.665 - 12603.077: 99.2342% ( 2) 00:06:54.384 12603.077 - 12653.489: 99.2444% ( 2) 00:06:54.384 12653.489 - 12703.902: 99.2547% ( 2) 00:06:54.384 12703.902 - 12754.314: 99.2650% ( 2) 00:06:54.384 12754.314 - 12804.726: 99.2753% ( 2) 00:06:54.384 12804.726 - 12855.138: 99.2856% ( 2) 00:06:54.384 12855.138 - 12905.551: 99.2907% ( 1) 00:06:54.384 12905.551 - 13006.375: 99.3113% ( 4) 00:06:54.384 13006.375 - 13107.200: 99.3318% ( 4) 00:06:54.384 13107.200 - 13208.025: 99.3421% ( 2) 00:06:54.384 14720.394 - 14821.218: 99.3575% ( 3) 00:06:54.384 14821.218 - 14922.043: 99.3781% ( 4) 00:06:54.384 14922.043 - 15022.868: 99.3986% ( 4) 00:06:54.384 15022.868 - 15123.692: 99.4243% ( 5) 00:06:54.384 15123.692 - 15224.517: 99.4449% ( 4) 00:06:54.384 15224.517 - 15325.342: 99.4655% ( 4) 00:06:54.384 15325.342 - 15426.166: 99.4860% ( 4) 00:06:54.384 15426.166 - 15526.991: 99.5066% ( 4) 00:06:54.384 15526.991 - 15627.815: 99.5271% ( 4) 00:06:54.384 15627.815 - 15728.640: 99.5528% ( 5) 00:06:54.384 15728.640 - 15829.465: 99.5734% ( 4) 
00:06:54.384 15829.465 - 15930.289: 99.5940% ( 4) 00:06:54.384 15930.289 - 16031.114: 99.6197% ( 5) 00:06:54.384 16031.114 - 16131.938: 99.6402% ( 4) 00:06:54.384 16131.938 - 16232.763: 99.6608% ( 4) 00:06:54.384 16232.763 - 16333.588: 99.6711% ( 2) 00:06:54.384 19156.677 - 19257.502: 99.6916% ( 4) 00:06:54.384 19257.502 - 19358.326: 99.7173% ( 5) 00:06:54.384 19358.326 - 19459.151: 99.7379% ( 4) 00:06:54.384 19459.151 - 19559.975: 99.7584% ( 4) 00:06:54.384 19559.975 - 19660.800: 99.7790% ( 4) 00:06:54.384 19660.800 - 19761.625: 99.7995% ( 4) 00:06:54.384 19761.625 - 19862.449: 99.8201% ( 4) 00:06:54.384 19862.449 - 19963.274: 99.8458% ( 5) 00:06:54.384 19963.274 - 20064.098: 99.8664% ( 4) 00:06:54.384 20064.098 - 20164.923: 99.8869% ( 4) 00:06:54.384 20164.923 - 20265.748: 99.9075% ( 4) 00:06:54.384 20265.748 - 20366.572: 99.9332% ( 5) 00:06:54.384 20366.572 - 20467.397: 99.9537% ( 4) 00:06:54.384 20467.397 - 20568.222: 99.9743% ( 4) 00:06:54.384 20568.222 - 20669.046: 100.0000% ( 5) 00:06:54.384 00:06:54.384 17:10:37 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:06:55.321 Initializing NVMe Controllers 00:06:55.321 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:55.321 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:55.321 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:55.321 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:55.321 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:06:55.321 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:06:55.321 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:06:55.321 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:06:55.321 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:06:55.321 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:06:55.321 Initialization complete. Launching workers. 
00:06:55.321 ======================================================== 00:06:55.321 Latency(us) 00:06:55.321 Device Information : IOPS MiB/s Average min max 00:06:55.321 PCIE (0000:00:10.0) NSID 1 from core 0: 17910.22 209.89 7155.89 5863.33 30775.81 00:06:55.321 PCIE (0000:00:11.0) NSID 1 from core 0: 17910.22 209.89 7144.97 5951.42 28936.28 00:06:55.321 PCIE (0000:00:13.0) NSID 1 from core 0: 17910.22 209.89 7134.08 5830.26 27305.94 00:06:55.321 PCIE (0000:00:12.0) NSID 1 from core 0: 17910.22 209.89 7123.02 5869.73 25631.42 00:06:55.321 PCIE (0000:00:12.0) NSID 2 from core 0: 17910.22 209.89 7111.94 5827.76 23972.30 00:06:55.321 PCIE (0000:00:12.0) NSID 3 from core 0: 17974.19 210.63 7075.52 5909.15 18824.20 00:06:55.321 ======================================================== 00:06:55.321 Total : 107525.29 1260.06 7124.21 5827.76 30775.81 00:06:55.321 00:06:55.321 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:55.321 ================================================================================= 00:06:55.321 1.00000% : 6125.095us 00:06:55.321 10.00000% : 6452.775us 00:06:55.321 25.00000% : 6604.012us 00:06:55.321 50.00000% : 6856.074us 00:06:55.321 75.00000% : 7259.372us 00:06:55.321 90.00000% : 7914.732us 00:06:55.321 95.00000% : 8570.092us 00:06:55.321 98.00000% : 9275.865us 00:06:55.321 99.00000% : 9931.225us 00:06:55.321 99.50000% : 25710.277us 00:06:55.321 99.90000% : 30449.034us 00:06:55.321 99.99000% : 30852.332us 00:06:55.321 99.99900% : 30852.332us 00:06:55.321 99.99990% : 30852.332us 00:06:55.321 99.99999% : 30852.332us 00:06:55.321 00:06:55.321 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:55.321 ================================================================================= 00:06:55.321 1.00000% : 6225.920us 00:06:55.321 10.00000% : 6553.600us 00:06:55.321 25.00000% : 6704.837us 00:06:55.321 50.00000% : 6856.074us 00:06:55.321 75.00000% : 7158.548us 00:06:55.321 90.00000% : 7914.732us 00:06:55.321 95.00000% : 8620.505us 00:06:55.321 98.00000% : 9124.628us 00:06:55.321 99.00000% : 9679.163us 00:06:55.321 99.50000% : 23996.258us 00:06:55.321 99.90000% : 28634.191us 00:06:55.321 99.99000% : 29037.489us 00:06:55.321 99.99900% : 29037.489us 00:06:55.321 99.99990% : 29037.489us 00:06:55.321 99.99999% : 29037.489us 00:06:55.321 00:06:55.321 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:55.321 ================================================================================= 00:06:55.321 1.00000% : 6200.714us 00:06:55.321 10.00000% : 6553.600us 00:06:55.321 25.00000% : 6704.837us 00:06:55.321 50.00000% : 6856.074us 00:06:55.322 75.00000% : 7158.548us 00:06:55.322 90.00000% : 7864.320us 00:06:55.322 95.00000% : 8570.092us 00:06:55.322 98.00000% : 9376.689us 00:06:55.322 99.00000% : 10183.286us 00:06:55.322 99.50000% : 22383.065us 00:06:55.322 99.90000% : 27020.997us 00:06:55.322 99.99000% : 27424.295us 00:06:55.322 99.99900% : 27424.295us 00:06:55.322 99.99990% : 27424.295us 00:06:55.322 99.99999% : 27424.295us 00:06:55.322 00:06:55.322 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:55.322 ================================================================================= 00:06:55.322 1.00000% : 6251.126us 00:06:55.322 10.00000% : 6553.600us 00:06:55.322 25.00000% : 6704.837us 00:06:55.322 50.00000% : 6856.074us 00:06:55.322 75.00000% : 7158.548us 00:06:55.322 90.00000% : 7864.320us 00:06:55.322 95.00000% : 8519.680us 00:06:55.322 98.00000% : 9326.277us 00:06:55.322 99.00000% : 
10082.462us 00:06:55.322 99.50000% : 20669.046us 00:06:55.322 99.90000% : 25306.978us 00:06:55.322 99.99000% : 25609.452us 00:06:55.322 99.99900% : 25710.277us 00:06:55.322 99.99990% : 25710.277us 00:06:55.322 99.99999% : 25710.277us 00:06:55.322 00:06:55.322 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:55.322 ================================================================================= 00:06:55.322 1.00000% : 6251.126us 00:06:55.322 10.00000% : 6553.600us 00:06:55.322 25.00000% : 6704.837us 00:06:55.322 50.00000% : 6856.074us 00:06:55.322 75.00000% : 7108.135us 00:06:55.322 90.00000% : 7864.320us 00:06:55.322 95.00000% : 8570.092us 00:06:55.322 98.00000% : 9477.514us 00:06:55.322 99.00000% : 9981.637us 00:06:55.322 99.50000% : 18854.203us 00:06:55.322 99.90000% : 23592.960us 00:06:55.322 99.99000% : 23996.258us 00:06:55.322 99.99900% : 23996.258us 00:06:55.322 99.99990% : 23996.258us 00:06:55.322 99.99999% : 23996.258us 00:06:55.322 00:06:55.322 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:55.322 ================================================================================= 00:06:55.322 1.00000% : 6225.920us 00:06:55.322 10.00000% : 6553.600us 00:06:55.322 25.00000% : 6704.837us 00:06:55.322 50.00000% : 6856.074us 00:06:55.322 75.00000% : 7108.135us 00:06:55.322 90.00000% : 7965.145us 00:06:55.322 95.00000% : 8570.092us 00:06:55.322 98.00000% : 9275.865us 00:06:55.322 99.00000% : 9779.988us 00:06:55.322 99.50000% : 13913.797us 00:06:55.322 99.90000% : 18450.905us 00:06:55.322 99.99000% : 18854.203us 00:06:55.322 99.99900% : 18854.203us 00:06:55.322 99.99990% : 18854.203us 00:06:55.322 99.99999% : 18854.203us 00:06:55.322 00:06:55.322 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:06:55.322 ============================================================================== 00:06:55.322 Range in us Cumulative IO count 00:06:55.322 5847.828 - 5873.034: 0.0335% ( 6) 00:06:55.322 5873.034 - 5898.240: 0.0837% ( 9) 00:06:55.322 5898.240 - 5923.446: 0.1395% ( 10) 00:06:55.322 5923.446 - 5948.652: 0.2065% ( 12) 00:06:55.322 5948.652 - 5973.858: 0.3571% ( 27) 00:06:55.322 5973.858 - 5999.065: 0.5022% ( 26) 00:06:55.322 5999.065 - 6024.271: 0.5636% ( 11) 00:06:55.322 6024.271 - 6049.477: 0.6808% ( 21) 00:06:55.322 6049.477 - 6074.683: 0.8705% ( 34) 00:06:55.322 6074.683 - 6099.889: 0.9821% ( 20) 00:06:55.322 6099.889 - 6125.095: 1.1886% ( 37) 00:06:55.322 6125.095 - 6150.302: 1.3449% ( 28) 00:06:55.322 6150.302 - 6175.508: 1.5569% ( 38) 00:06:55.322 6175.508 - 6200.714: 1.7634% ( 37) 00:06:55.322 6200.714 - 6225.920: 1.9978% ( 42) 00:06:55.322 6225.920 - 6251.126: 2.2377% ( 43) 00:06:55.322 6251.126 - 6276.332: 2.6004% ( 65) 00:06:55.322 6276.332 - 6301.538: 3.0580% ( 82) 00:06:55.322 6301.538 - 6326.745: 3.6998% ( 115) 00:06:55.322 6326.745 - 6351.951: 4.4420% ( 133) 00:06:55.322 6351.951 - 6377.157: 5.4911% ( 188) 00:06:55.322 6377.157 - 6402.363: 7.0647% ( 282) 00:06:55.322 6402.363 - 6427.569: 9.5145% ( 439) 00:06:55.322 6427.569 - 6452.775: 11.8304% ( 415) 00:06:55.322 6452.775 - 6503.188: 16.8638% ( 902) 00:06:55.322 6503.188 - 6553.600: 21.9754% ( 916) 00:06:55.322 6553.600 - 6604.012: 27.3717% ( 967) 00:06:55.322 6604.012 - 6654.425: 31.8248% ( 798) 00:06:55.322 6654.425 - 6704.837: 36.4565% ( 830) 00:06:55.322 6704.837 - 6755.249: 41.1272% ( 837) 00:06:55.322 6755.249 - 6805.662: 45.9821% ( 870) 00:06:55.322 6805.662 - 6856.074: 50.6362% ( 834) 00:06:55.322 6856.074 - 6906.486: 54.7210% ( 732) 00:06:55.322 6906.486 - 
6956.898: 58.7109% ( 715) 00:06:55.322 6956.898 - 7007.311: 62.0424% ( 597) 00:06:55.322 7007.311 - 7057.723: 65.5413% ( 627) 00:06:55.322 7057.723 - 7108.135: 68.6719% ( 561) 00:06:55.322 7108.135 - 7158.548: 71.7522% ( 552) 00:06:55.322 7158.548 - 7208.960: 74.6763% ( 524) 00:06:55.322 7208.960 - 7259.372: 77.1931% ( 451) 00:06:55.322 7259.372 - 7309.785: 79.3527% ( 387) 00:06:55.322 7309.785 - 7360.197: 81.1496% ( 322) 00:06:55.322 7360.197 - 7410.609: 82.8237% ( 300) 00:06:55.322 7410.609 - 7461.022: 83.9342% ( 199) 00:06:55.322 7461.022 - 7511.434: 84.7712% ( 150) 00:06:55.322 7511.434 - 7561.846: 85.5134% ( 133) 00:06:55.322 7561.846 - 7612.258: 86.2444% ( 131) 00:06:55.322 7612.258 - 7662.671: 87.0592% ( 146) 00:06:55.322 7662.671 - 7713.083: 87.8795% ( 147) 00:06:55.322 7713.083 - 7763.495: 88.5770% ( 125) 00:06:55.322 7763.495 - 7813.908: 89.1964% ( 111) 00:06:55.322 7813.908 - 7864.320: 89.8884% ( 124) 00:06:55.322 7864.320 - 7914.732: 90.5190% ( 113) 00:06:55.322 7914.732 - 7965.145: 91.0045% ( 87) 00:06:55.322 7965.145 - 8015.557: 91.3783% ( 67) 00:06:55.322 8015.557 - 8065.969: 91.7578% ( 68) 00:06:55.322 8065.969 - 8116.382: 92.1205% ( 65) 00:06:55.322 8116.382 - 8166.794: 92.5000% ( 68) 00:06:55.322 8166.794 - 8217.206: 92.8739% ( 67) 00:06:55.322 8217.206 - 8267.618: 93.2143% ( 61) 00:06:55.322 8267.618 - 8318.031: 93.5324% ( 57) 00:06:55.322 8318.031 - 8368.443: 93.9342% ( 72) 00:06:55.322 8368.443 - 8418.855: 94.3248% ( 70) 00:06:55.322 8418.855 - 8469.268: 94.6763% ( 63) 00:06:55.322 8469.268 - 8519.680: 94.9888% ( 56) 00:06:55.322 8519.680 - 8570.092: 95.2902% ( 54) 00:06:55.322 8570.092 - 8620.505: 95.4576% ( 30) 00:06:55.322 8620.505 - 8670.917: 95.6641% ( 37) 00:06:55.322 8670.917 - 8721.329: 95.9821% ( 57) 00:06:55.322 8721.329 - 8771.742: 96.3393% ( 64) 00:06:55.322 8771.742 - 8822.154: 96.5904% ( 45) 00:06:55.322 8822.154 - 8872.566: 96.8415% ( 45) 00:06:55.322 8872.566 - 8922.978: 97.0312% ( 34) 00:06:55.322 8922.978 - 8973.391: 97.1763% ( 26) 00:06:55.322 8973.391 - 9023.803: 97.2935% ( 21) 00:06:55.322 9023.803 - 9074.215: 97.4442% ( 27) 00:06:55.322 9074.215 - 9124.628: 97.6060% ( 29) 00:06:55.322 9124.628 - 9175.040: 97.7511% ( 26) 00:06:55.322 9175.040 - 9225.452: 97.8906% ( 25) 00:06:55.322 9225.452 - 9275.865: 98.0357% ( 26) 00:06:55.322 9275.865 - 9326.277: 98.1641% ( 23) 00:06:55.322 9326.277 - 9376.689: 98.3594% ( 35) 00:06:55.322 9376.689 - 9427.102: 98.4487% ( 16) 00:06:55.322 9427.102 - 9477.514: 98.5268% ( 14) 00:06:55.322 9477.514 - 9527.926: 98.5938% ( 12) 00:06:55.322 9527.926 - 9578.338: 98.6551% ( 11) 00:06:55.322 9578.338 - 9628.751: 98.7444% ( 16) 00:06:55.322 9628.751 - 9679.163: 98.7946% ( 9) 00:06:55.322 9679.163 - 9729.575: 98.8449% ( 9) 00:06:55.322 9729.575 - 9779.988: 98.8895% ( 8) 00:06:55.322 9779.988 - 9830.400: 98.9286% ( 7) 00:06:55.322 9830.400 - 9880.812: 98.9676% ( 7) 00:06:55.322 9880.812 - 9931.225: 99.0458% ( 14) 00:06:55.322 9931.225 - 9981.637: 99.0681% ( 4) 00:06:55.322 9981.637 - 10032.049: 99.0904% ( 4) 00:06:55.322 10032.049 - 10082.462: 99.1071% ( 3) 00:06:55.322 10082.462 - 10132.874: 99.1295% ( 4) 00:06:55.322 10132.874 - 10183.286: 99.1574% ( 5) 00:06:55.322 10183.286 - 10233.698: 99.1797% ( 4) 00:06:55.322 10233.698 - 10284.111: 99.2076% ( 5) 00:06:55.322 10284.111 - 10334.523: 99.2299% ( 4) 00:06:55.322 10334.523 - 10384.935: 99.2634% ( 6) 00:06:55.322 10384.935 - 10435.348: 99.2857% ( 4) 00:06:55.322 24903.680 - 25004.505: 99.3304% ( 8) 00:06:55.322 25004.505 - 25105.329: 99.3750% ( 8) 00:06:55.322 25105.329 - 
25206.154: 99.4252% ( 9) 00:06:55.322 25206.154 - 25306.978: 99.4475% ( 4) 00:06:55.322 25306.978 - 25407.803: 99.4587% ( 2) 00:06:55.322 25407.803 - 25508.628: 99.4754% ( 3) 00:06:55.322 25508.628 - 25609.452: 99.4978% ( 4) 00:06:55.322 25609.452 - 25710.277: 99.5089% ( 2) 00:06:55.322 25710.277 - 25811.102: 99.5201% ( 2) 00:06:55.322 25811.102 - 26012.751: 99.5592% ( 7) 00:06:55.322 26012.751 - 26214.400: 99.6038% ( 8) 00:06:55.322 26214.400 - 26416.049: 99.6429% ( 7) 00:06:55.322 29037.489 - 29239.138: 99.6708% ( 5) 00:06:55.322 29239.138 - 29440.788: 99.7154% ( 8) 00:06:55.322 29440.788 - 29642.437: 99.7600% ( 8) 00:06:55.322 29642.437 - 29844.086: 99.8047% ( 8) 00:06:55.322 29844.086 - 30045.735: 99.8438% ( 7) 00:06:55.322 30045.735 - 30247.385: 99.8884% ( 8) 00:06:55.322 30247.385 - 30449.034: 99.9330% ( 8) 00:06:55.322 30449.034 - 30650.683: 99.9777% ( 8) 00:06:55.322 30650.683 - 30852.332: 100.0000% ( 4) 00:06:55.322 00:06:55.322 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:06:55.322 ============================================================================== 00:06:55.322 Range in us Cumulative IO count 00:06:55.322 5948.652 - 5973.858: 0.0223% ( 4) 00:06:55.322 5973.858 - 5999.065: 0.0614% ( 7) 00:06:55.322 5999.065 - 6024.271: 0.1283% ( 12) 00:06:55.322 6024.271 - 6049.477: 0.2065% ( 14) 00:06:55.322 6049.477 - 6074.683: 0.2734% ( 12) 00:06:55.322 6074.683 - 6099.889: 0.3460% ( 13) 00:06:55.322 6099.889 - 6125.095: 0.4074% ( 11) 00:06:55.322 6125.095 - 6150.302: 0.5636% ( 28) 00:06:55.322 6150.302 - 6175.508: 0.7366% ( 31) 00:06:55.322 6175.508 - 6200.714: 0.8761% ( 25) 00:06:55.322 6200.714 - 6225.920: 1.0714% ( 35) 00:06:55.323 6225.920 - 6251.126: 1.3560% ( 51) 00:06:55.323 6251.126 - 6276.332: 1.6239% ( 48) 00:06:55.323 6276.332 - 6301.538: 2.0312% ( 73) 00:06:55.323 6301.538 - 6326.745: 2.4888% ( 82) 00:06:55.323 6326.745 - 6351.951: 2.9520% ( 83) 00:06:55.323 6351.951 - 6377.157: 3.4375% ( 87) 00:06:55.323 6377.157 - 6402.363: 3.9342% ( 89) 00:06:55.323 6402.363 - 6427.569: 4.6987% ( 137) 00:06:55.323 6427.569 - 6452.775: 5.5804% ( 158) 00:06:55.323 6452.775 - 6503.188: 8.1920% ( 468) 00:06:55.323 6503.188 - 6553.600: 11.9029% ( 665) 00:06:55.323 6553.600 - 6604.012: 17.2545% ( 959) 00:06:55.323 6604.012 - 6654.425: 23.4766% ( 1115) 00:06:55.323 6654.425 - 6704.837: 31.3672% ( 1414) 00:06:55.323 6704.837 - 6755.249: 38.5268% ( 1283) 00:06:55.323 6755.249 - 6805.662: 45.2511% ( 1205) 00:06:55.323 6805.662 - 6856.074: 51.8304% ( 1179) 00:06:55.323 6856.074 - 6906.486: 57.9241% ( 1092) 00:06:55.323 6906.486 - 6956.898: 63.9732% ( 1084) 00:06:55.323 6956.898 - 7007.311: 68.6272% ( 834) 00:06:55.323 7007.311 - 7057.723: 72.0480% ( 613) 00:06:55.323 7057.723 - 7108.135: 74.9888% ( 527) 00:06:55.323 7108.135 - 7158.548: 77.8404% ( 511) 00:06:55.323 7158.548 - 7208.960: 79.8549% ( 361) 00:06:55.323 7208.960 - 7259.372: 81.3449% ( 267) 00:06:55.323 7259.372 - 7309.785: 82.5949% ( 224) 00:06:55.323 7309.785 - 7360.197: 83.4598% ( 155) 00:06:55.323 7360.197 - 7410.609: 84.0737% ( 110) 00:06:55.323 7410.609 - 7461.022: 84.8996% ( 148) 00:06:55.323 7461.022 - 7511.434: 85.6585% ( 136) 00:06:55.323 7511.434 - 7561.846: 86.0993% ( 79) 00:06:55.323 7561.846 - 7612.258: 86.7411% ( 115) 00:06:55.323 7612.258 - 7662.671: 87.7400% ( 179) 00:06:55.323 7662.671 - 7713.083: 88.4040% ( 119) 00:06:55.323 7713.083 - 7763.495: 88.9007% ( 89) 00:06:55.323 7763.495 - 7813.908: 89.3471% ( 80) 00:06:55.323 7813.908 - 7864.320: 89.9051% ( 100) 00:06:55.323 7864.320 - 7914.732: 
90.2400% ( 60) 00:06:55.323 7914.732 - 7965.145: 90.6585% ( 75) 00:06:55.323 7965.145 - 8015.557: 91.1440% ( 87) 00:06:55.323 8015.557 - 8065.969: 91.5458% ( 72) 00:06:55.323 8065.969 - 8116.382: 91.8471% ( 54) 00:06:55.323 8116.382 - 8166.794: 92.2321% ( 69) 00:06:55.323 8166.794 - 8217.206: 92.7009% ( 84) 00:06:55.323 8217.206 - 8267.618: 93.0748% ( 67) 00:06:55.323 8267.618 - 8318.031: 93.4208% ( 62) 00:06:55.323 8318.031 - 8368.443: 93.7667% ( 62) 00:06:55.323 8368.443 - 8418.855: 94.1406% ( 67) 00:06:55.323 8418.855 - 8469.268: 94.4531% ( 56) 00:06:55.323 8469.268 - 8519.680: 94.7321% ( 50) 00:06:55.323 8519.680 - 8570.092: 94.9665% ( 42) 00:06:55.323 8570.092 - 8620.505: 95.2400% ( 49) 00:06:55.323 8620.505 - 8670.917: 95.4632% ( 40) 00:06:55.323 8670.917 - 8721.329: 95.6920% ( 41) 00:06:55.323 8721.329 - 8771.742: 96.0045% ( 56) 00:06:55.323 8771.742 - 8822.154: 96.3393% ( 60) 00:06:55.323 8822.154 - 8872.566: 96.9085% ( 102) 00:06:55.323 8872.566 - 8922.978: 97.2266% ( 57) 00:06:55.323 8922.978 - 8973.391: 97.4665% ( 43) 00:06:55.323 8973.391 - 9023.803: 97.6786% ( 38) 00:06:55.323 9023.803 - 9074.215: 97.9241% ( 44) 00:06:55.323 9074.215 - 9124.628: 98.0692% ( 26) 00:06:55.323 9124.628 - 9175.040: 98.1752% ( 19) 00:06:55.323 9175.040 - 9225.452: 98.2924% ( 21) 00:06:55.323 9225.452 - 9275.865: 98.3873% ( 17) 00:06:55.323 9275.865 - 9326.277: 98.4933% ( 19) 00:06:55.323 9326.277 - 9376.689: 98.5826% ( 16) 00:06:55.323 9376.689 - 9427.102: 98.6775% ( 17) 00:06:55.323 9427.102 - 9477.514: 98.7444% ( 12) 00:06:55.323 9477.514 - 9527.926: 98.7891% ( 8) 00:06:55.323 9527.926 - 9578.338: 98.8393% ( 9) 00:06:55.323 9578.338 - 9628.751: 98.9453% ( 19) 00:06:55.323 9628.751 - 9679.163: 99.0681% ( 22) 00:06:55.323 9679.163 - 9729.575: 99.1016% ( 6) 00:06:55.323 9729.575 - 9779.988: 99.1295% ( 5) 00:06:55.323 9779.988 - 9830.400: 99.1629% ( 6) 00:06:55.323 9830.400 - 9880.812: 99.1908% ( 5) 00:06:55.323 9880.812 - 9931.225: 99.2132% ( 4) 00:06:55.323 9931.225 - 9981.637: 99.2467% ( 6) 00:06:55.323 9981.637 - 10032.049: 99.2746% ( 5) 00:06:55.323 10032.049 - 10082.462: 99.2857% ( 2) 00:06:55.323 22988.012 - 23088.837: 99.2969% ( 2) 00:06:55.323 23088.837 - 23189.662: 99.3192% ( 4) 00:06:55.323 23189.662 - 23290.486: 99.3415% ( 4) 00:06:55.323 23290.486 - 23391.311: 99.3694% ( 5) 00:06:55.323 23391.311 - 23492.135: 99.3917% ( 4) 00:06:55.323 23492.135 - 23592.960: 99.4141% ( 4) 00:06:55.323 23592.960 - 23693.785: 99.4364% ( 4) 00:06:55.323 23693.785 - 23794.609: 99.4587% ( 4) 00:06:55.323 23794.609 - 23895.434: 99.4810% ( 4) 00:06:55.323 23895.434 - 23996.258: 99.5033% ( 4) 00:06:55.323 23996.258 - 24097.083: 99.5257% ( 4) 00:06:55.323 24097.083 - 24197.908: 99.5480% ( 4) 00:06:55.323 24197.908 - 24298.732: 99.5703% ( 4) 00:06:55.323 24298.732 - 24399.557: 99.5926% ( 4) 00:06:55.323 24399.557 - 24500.382: 99.6150% ( 4) 00:06:55.323 24500.382 - 24601.206: 99.6373% ( 4) 00:06:55.323 24601.206 - 24702.031: 99.6429% ( 1) 00:06:55.323 27222.646 - 27424.295: 99.6596% ( 3) 00:06:55.323 27424.295 - 27625.945: 99.7042% ( 8) 00:06:55.323 27625.945 - 27827.594: 99.7489% ( 8) 00:06:55.323 27827.594 - 28029.243: 99.7935% ( 8) 00:06:55.323 28029.243 - 28230.892: 99.8438% ( 9) 00:06:55.323 28230.892 - 28432.542: 99.8884% ( 8) 00:06:55.323 28432.542 - 28634.191: 99.9275% ( 7) 00:06:55.323 28634.191 - 28835.840: 99.9721% ( 8) 00:06:55.323 28835.840 - 29037.489: 100.0000% ( 5) 00:06:55.323 00:06:55.323 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:06:55.323 
============================================================================== 00:06:55.323 Range in us Cumulative IO count 00:06:55.323 5822.622 - 5847.828: 0.0056% ( 1) 00:06:55.323 5847.828 - 5873.034: 0.0112% ( 1) 00:06:55.323 5898.240 - 5923.446: 0.0167% ( 1) 00:06:55.323 5948.652 - 5973.858: 0.0223% ( 1) 00:06:55.323 5973.858 - 5999.065: 0.0279% ( 1) 00:06:55.323 5999.065 - 6024.271: 0.0502% ( 4) 00:06:55.323 6024.271 - 6049.477: 0.0893% ( 7) 00:06:55.323 6049.477 - 6074.683: 0.1674% ( 14) 00:06:55.323 6074.683 - 6099.889: 0.2734% ( 19) 00:06:55.323 6099.889 - 6125.095: 0.4185% ( 26) 00:06:55.323 6125.095 - 6150.302: 0.5748% ( 28) 00:06:55.323 6150.302 - 6175.508: 0.7422% ( 30) 00:06:55.323 6175.508 - 6200.714: 1.0100% ( 48) 00:06:55.323 6200.714 - 6225.920: 1.3114% ( 54) 00:06:55.323 6225.920 - 6251.126: 1.5681% ( 46) 00:06:55.323 6251.126 - 6276.332: 1.8471% ( 50) 00:06:55.323 6276.332 - 6301.538: 2.2545% ( 73) 00:06:55.323 6301.538 - 6326.745: 2.7623% ( 91) 00:06:55.323 6326.745 - 6351.951: 3.2422% ( 86) 00:06:55.323 6351.951 - 6377.157: 3.7500% ( 91) 00:06:55.323 6377.157 - 6402.363: 4.3527% ( 108) 00:06:55.323 6402.363 - 6427.569: 5.1116% ( 136) 00:06:55.323 6427.569 - 6452.775: 6.1998% ( 195) 00:06:55.323 6452.775 - 6503.188: 8.9844% ( 499) 00:06:55.323 6503.188 - 6553.600: 13.2254% ( 760) 00:06:55.323 6553.600 - 6604.012: 18.1362% ( 880) 00:06:55.323 6604.012 - 6654.425: 24.2801% ( 1101) 00:06:55.323 6654.425 - 6704.837: 31.4118% ( 1278) 00:06:55.323 6704.837 - 6755.249: 40.2400% ( 1582) 00:06:55.323 6755.249 - 6805.662: 46.4788% ( 1118) 00:06:55.323 6805.662 - 6856.074: 52.5112% ( 1081) 00:06:55.323 6856.074 - 6906.486: 57.7176% ( 933) 00:06:55.323 6906.486 - 6956.898: 64.2634% ( 1173) 00:06:55.323 6956.898 - 7007.311: 68.3594% ( 734) 00:06:55.323 7007.311 - 7057.723: 71.6016% ( 581) 00:06:55.323 7057.723 - 7108.135: 74.9107% ( 593) 00:06:55.323 7108.135 - 7158.548: 77.1373% ( 399) 00:06:55.323 7158.548 - 7208.960: 78.9509% ( 325) 00:06:55.323 7208.960 - 7259.372: 80.2121% ( 226) 00:06:55.323 7259.372 - 7309.785: 81.2165% ( 180) 00:06:55.323 7309.785 - 7360.197: 82.7790% ( 280) 00:06:55.323 7360.197 - 7410.609: 84.1964% ( 254) 00:06:55.323 7410.609 - 7461.022: 84.8493% ( 117) 00:06:55.323 7461.022 - 7511.434: 85.8594% ( 181) 00:06:55.323 7511.434 - 7561.846: 86.6574% ( 143) 00:06:55.323 7561.846 - 7612.258: 87.2489% ( 106) 00:06:55.323 7612.258 - 7662.671: 87.9632% ( 128) 00:06:55.323 7662.671 - 7713.083: 88.4933% ( 95) 00:06:55.323 7713.083 - 7763.495: 89.1853% ( 124) 00:06:55.323 7763.495 - 7813.908: 89.8382% ( 117) 00:06:55.323 7813.908 - 7864.320: 90.3683% ( 95) 00:06:55.323 7864.320 - 7914.732: 90.7533% ( 69) 00:06:55.323 7914.732 - 7965.145: 91.0993% ( 62) 00:06:55.323 7965.145 - 8015.557: 91.4397% ( 61) 00:06:55.323 8015.557 - 8065.969: 91.7522% ( 56) 00:06:55.323 8065.969 - 8116.382: 92.2266% ( 85) 00:06:55.323 8116.382 - 8166.794: 92.7623% ( 96) 00:06:55.323 8166.794 - 8217.206: 93.2422% ( 86) 00:06:55.323 8217.206 - 8267.618: 93.6272% ( 69) 00:06:55.323 8267.618 - 8318.031: 93.9007% ( 49) 00:06:55.323 8318.031 - 8368.443: 94.3192% ( 75) 00:06:55.323 8368.443 - 8418.855: 94.5424% ( 40) 00:06:55.323 8418.855 - 8469.268: 94.8103% ( 48) 00:06:55.323 8469.268 - 8519.680: 94.9888% ( 32) 00:06:55.323 8519.680 - 8570.092: 95.2400% ( 45) 00:06:55.323 8570.092 - 8620.505: 95.4911% ( 45) 00:06:55.323 8620.505 - 8670.917: 95.6808% ( 34) 00:06:55.323 8670.917 - 8721.329: 95.9096% ( 41) 00:06:55.323 8721.329 - 8771.742: 96.0324% ( 22) 00:06:55.323 8771.742 - 8822.154: 96.1272% ( 
17) 00:06:55.323 8822.154 - 8872.566: 96.2891% ( 29) 00:06:55.323 8872.566 - 8922.978: 96.4676% ( 32) 00:06:55.323 8922.978 - 8973.391: 96.6574% ( 34) 00:06:55.323 8973.391 - 9023.803: 96.8917% ( 42) 00:06:55.323 9023.803 - 9074.215: 97.0424% ( 27) 00:06:55.323 9074.215 - 9124.628: 97.1708% ( 23) 00:06:55.323 9124.628 - 9175.040: 97.3103% ( 25) 00:06:55.323 9175.040 - 9225.452: 97.4888% ( 32) 00:06:55.323 9225.452 - 9275.865: 97.7511% ( 47) 00:06:55.323 9275.865 - 9326.277: 97.8627% ( 20) 00:06:55.323 9326.277 - 9376.689: 98.0357% ( 31) 00:06:55.323 9376.689 - 9427.102: 98.1808% ( 26) 00:06:55.323 9427.102 - 9477.514: 98.3426% ( 29) 00:06:55.323 9477.514 - 9527.926: 98.4431% ( 18) 00:06:55.323 9527.926 - 9578.338: 98.5491% ( 19) 00:06:55.323 9578.338 - 9628.751: 98.6328% ( 15) 00:06:55.323 9628.751 - 9679.163: 98.7109% ( 14) 00:06:55.324 9679.163 - 9729.575: 98.7835% ( 13) 00:06:55.324 9729.575 - 9779.988: 98.8281% ( 8) 00:06:55.324 9779.988 - 9830.400: 98.8616% ( 6) 00:06:55.324 9830.400 - 9880.812: 98.8839% ( 4) 00:06:55.324 9880.812 - 9931.225: 98.9062% ( 4) 00:06:55.324 9931.225 - 9981.637: 98.9342% ( 5) 00:06:55.324 9981.637 - 10032.049: 98.9565% ( 4) 00:06:55.324 10032.049 - 10082.462: 98.9732% ( 3) 00:06:55.324 10082.462 - 10132.874: 98.9844% ( 2) 00:06:55.324 10132.874 - 10183.286: 99.0011% ( 3) 00:06:55.324 10183.286 - 10233.698: 99.0123% ( 2) 00:06:55.324 10233.698 - 10284.111: 99.0737% ( 11) 00:06:55.324 10284.111 - 10334.523: 99.1462% ( 13) 00:06:55.324 10334.523 - 10384.935: 99.2243% ( 14) 00:06:55.324 10384.935 - 10435.348: 99.2355% ( 2) 00:06:55.324 10435.348 - 10485.760: 99.2467% ( 2) 00:06:55.324 10485.760 - 10536.172: 99.2578% ( 2) 00:06:55.324 10536.172 - 10586.585: 99.2634% ( 1) 00:06:55.324 10586.585 - 10636.997: 99.2746% ( 2) 00:06:55.324 10636.997 - 10687.409: 99.2857% ( 2) 00:06:55.324 21374.818 - 21475.643: 99.3080% ( 4) 00:06:55.324 21475.643 - 21576.468: 99.3304% ( 4) 00:06:55.324 21576.468 - 21677.292: 99.3527% ( 4) 00:06:55.324 21677.292 - 21778.117: 99.3750% ( 4) 00:06:55.324 21778.117 - 21878.942: 99.3917% ( 3) 00:06:55.324 21878.942 - 21979.766: 99.4141% ( 4) 00:06:55.324 21979.766 - 22080.591: 99.4364% ( 4) 00:06:55.324 22080.591 - 22181.415: 99.4587% ( 4) 00:06:55.324 22181.415 - 22282.240: 99.4810% ( 4) 00:06:55.324 22282.240 - 22383.065: 99.5033% ( 4) 00:06:55.324 22383.065 - 22483.889: 99.5257% ( 4) 00:06:55.324 22483.889 - 22584.714: 99.5480% ( 4) 00:06:55.324 22584.714 - 22685.538: 99.5647% ( 3) 00:06:55.324 22685.538 - 22786.363: 99.5871% ( 4) 00:06:55.324 22786.363 - 22887.188: 99.6150% ( 5) 00:06:55.324 22887.188 - 22988.012: 99.6373% ( 4) 00:06:55.324 22988.012 - 23088.837: 99.6429% ( 1) 00:06:55.324 25609.452 - 25710.277: 99.6484% ( 1) 00:06:55.324 25710.277 - 25811.102: 99.6708% ( 4) 00:06:55.324 25811.102 - 26012.751: 99.7098% ( 7) 00:06:55.324 26012.751 - 26214.400: 99.7489% ( 7) 00:06:55.324 26214.400 - 26416.049: 99.7935% ( 8) 00:06:55.324 26416.049 - 26617.698: 99.8382% ( 8) 00:06:55.324 26617.698 - 26819.348: 99.8828% ( 8) 00:06:55.324 26819.348 - 27020.997: 99.9330% ( 9) 00:06:55.324 27020.997 - 27222.646: 99.9777% ( 8) 00:06:55.324 27222.646 - 27424.295: 100.0000% ( 4) 00:06:55.324 00:06:55.324 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:06:55.324 ============================================================================== 00:06:55.324 Range in us Cumulative IO count 00:06:55.324 5847.828 - 5873.034: 0.0056% ( 1) 00:06:55.324 5898.240 - 5923.446: 0.0112% ( 1) 00:06:55.324 5948.652 - 5973.858: 0.0391% ( 5) 
00:06:55.324 5973.858 - 5999.065: 0.0614% ( 4) 00:06:55.324 5999.065 - 6024.271: 0.1004% ( 7) 00:06:55.324 6024.271 - 6049.477: 0.1562% ( 10) 00:06:55.324 6049.477 - 6074.683: 0.2176% ( 11) 00:06:55.324 6074.683 - 6099.889: 0.2902% ( 13) 00:06:55.324 6099.889 - 6125.095: 0.3627% ( 13) 00:06:55.324 6125.095 - 6150.302: 0.4743% ( 20) 00:06:55.324 6150.302 - 6175.508: 0.6083% ( 24) 00:06:55.324 6175.508 - 6200.714: 0.7478% ( 25) 00:06:55.324 6200.714 - 6225.920: 0.9710% ( 40) 00:06:55.324 6225.920 - 6251.126: 1.1830% ( 38) 00:06:55.324 6251.126 - 6276.332: 1.5067% ( 58) 00:06:55.324 6276.332 - 6301.538: 1.9029% ( 71) 00:06:55.324 6301.538 - 6326.745: 2.4330% ( 95) 00:06:55.324 6326.745 - 6351.951: 3.1808% ( 134) 00:06:55.324 6351.951 - 6377.157: 3.8002% ( 111) 00:06:55.324 6377.157 - 6402.363: 4.5089% ( 127) 00:06:55.324 6402.363 - 6427.569: 5.2232% ( 128) 00:06:55.324 6427.569 - 6452.775: 6.0324% ( 145) 00:06:55.324 6452.775 - 6503.188: 8.8783% ( 510) 00:06:55.324 6503.188 - 6553.600: 13.0525% ( 748) 00:06:55.324 6553.600 - 6604.012: 18.7388% ( 1019) 00:06:55.324 6604.012 - 6654.425: 24.4810% ( 1029) 00:06:55.324 6654.425 - 6704.837: 31.4732% ( 1253) 00:06:55.324 6704.837 - 6755.249: 39.7042% ( 1475) 00:06:55.324 6755.249 - 6805.662: 46.2221% ( 1168) 00:06:55.324 6805.662 - 6856.074: 51.8806% ( 1014) 00:06:55.324 6856.074 - 6906.486: 58.1641% ( 1126) 00:06:55.324 6906.486 - 6956.898: 64.2913% ( 1098) 00:06:55.324 6956.898 - 7007.311: 68.8839% ( 823) 00:06:55.324 7007.311 - 7057.723: 72.2879% ( 610) 00:06:55.324 7057.723 - 7108.135: 74.8549% ( 460) 00:06:55.324 7108.135 - 7158.548: 76.7746% ( 344) 00:06:55.324 7158.548 - 7208.960: 78.7779% ( 359) 00:06:55.324 7208.960 - 7259.372: 80.3292% ( 278) 00:06:55.324 7259.372 - 7309.785: 81.4844% ( 207) 00:06:55.324 7309.785 - 7360.197: 82.7679% ( 230) 00:06:55.324 7360.197 - 7410.609: 84.0904% ( 237) 00:06:55.324 7410.609 - 7461.022: 84.9554% ( 155) 00:06:55.324 7461.022 - 7511.434: 85.6808% ( 130) 00:06:55.324 7511.434 - 7561.846: 86.5960% ( 164) 00:06:55.324 7561.846 - 7612.258: 87.3270% ( 131) 00:06:55.324 7612.258 - 7662.671: 87.9967% ( 120) 00:06:55.324 7662.671 - 7713.083: 88.6105% ( 110) 00:06:55.324 7713.083 - 7763.495: 89.2243% ( 110) 00:06:55.324 7763.495 - 7813.908: 89.7154% ( 88) 00:06:55.324 7813.908 - 7864.320: 90.0558% ( 61) 00:06:55.324 7864.320 - 7914.732: 90.6083% ( 99) 00:06:55.324 7914.732 - 7965.145: 91.0212% ( 74) 00:06:55.324 7965.145 - 8015.557: 91.4397% ( 75) 00:06:55.324 8015.557 - 8065.969: 91.7746% ( 60) 00:06:55.324 8065.969 - 8116.382: 92.1875% ( 74) 00:06:55.324 8116.382 - 8166.794: 92.5112% ( 58) 00:06:55.324 8166.794 - 8217.206: 92.8795% ( 66) 00:06:55.324 8217.206 - 8267.618: 93.2143% ( 60) 00:06:55.324 8267.618 - 8318.031: 93.6384% ( 76) 00:06:55.324 8318.031 - 8368.443: 94.0737% ( 78) 00:06:55.324 8368.443 - 8418.855: 94.3527% ( 50) 00:06:55.324 8418.855 - 8469.268: 94.8047% ( 81) 00:06:55.324 8469.268 - 8519.680: 95.1060% ( 54) 00:06:55.324 8519.680 - 8570.092: 95.4241% ( 57) 00:06:55.324 8570.092 - 8620.505: 95.5804% ( 28) 00:06:55.324 8620.505 - 8670.917: 95.7031% ( 22) 00:06:55.324 8670.917 - 8721.329: 95.8594% ( 28) 00:06:55.324 8721.329 - 8771.742: 96.0770% ( 39) 00:06:55.324 8771.742 - 8822.154: 96.1942% ( 21) 00:06:55.324 8822.154 - 8872.566: 96.3114% ( 21) 00:06:55.324 8872.566 - 8922.978: 96.4286% ( 21) 00:06:55.324 8922.978 - 8973.391: 96.5848% ( 28) 00:06:55.324 8973.391 - 9023.803: 96.7243% ( 25) 00:06:55.324 9023.803 - 9074.215: 96.9308% ( 37) 00:06:55.324 9074.215 - 9124.628: 97.2489% ( 57) 
00:06:55.324 9124.628 - 9175.040: 97.3940% ( 26) 00:06:55.324 9175.040 - 9225.452: 97.5502% ( 28) 00:06:55.324 9225.452 - 9275.865: 97.9018% ( 63) 00:06:55.324 9275.865 - 9326.277: 98.0190% ( 21) 00:06:55.324 9326.277 - 9376.689: 98.1027% ( 15) 00:06:55.324 9376.689 - 9427.102: 98.1696% ( 12) 00:06:55.324 9427.102 - 9477.514: 98.2589% ( 16) 00:06:55.324 9477.514 - 9527.926: 98.3817% ( 22) 00:06:55.324 9527.926 - 9578.338: 98.5324% ( 27) 00:06:55.324 9578.338 - 9628.751: 98.6217% ( 16) 00:06:55.324 9628.751 - 9679.163: 98.6663% ( 8) 00:06:55.324 9679.163 - 9729.575: 98.7054% ( 7) 00:06:55.324 9729.575 - 9779.988: 98.7388% ( 6) 00:06:55.324 9779.988 - 9830.400: 98.7723% ( 6) 00:06:55.324 9830.400 - 9880.812: 98.8114% ( 7) 00:06:55.324 9880.812 - 9931.225: 98.8393% ( 5) 00:06:55.324 9931.225 - 9981.637: 98.8839% ( 8) 00:06:55.324 9981.637 - 10032.049: 98.9342% ( 9) 00:06:55.324 10032.049 - 10082.462: 99.0513% ( 21) 00:06:55.324 10082.462 - 10132.874: 99.1797% ( 23) 00:06:55.324 10132.874 - 10183.286: 99.2020% ( 4) 00:06:55.324 10183.286 - 10233.698: 99.2188% ( 3) 00:06:55.324 10233.698 - 10284.111: 99.2355% ( 3) 00:06:55.324 10284.111 - 10334.523: 99.2467% ( 2) 00:06:55.324 10334.523 - 10384.935: 99.2578% ( 2) 00:06:55.324 10384.935 - 10435.348: 99.2746% ( 3) 00:06:55.324 10435.348 - 10485.760: 99.2857% ( 2) 00:06:55.324 19660.800 - 19761.625: 99.3025% ( 3) 00:06:55.324 19761.625 - 19862.449: 99.3248% ( 4) 00:06:55.324 19862.449 - 19963.274: 99.3527% ( 5) 00:06:55.324 19963.274 - 20064.098: 99.3750% ( 4) 00:06:55.324 20064.098 - 20164.923: 99.3973% ( 4) 00:06:55.324 20164.923 - 20265.748: 99.4196% ( 4) 00:06:55.324 20265.748 - 20366.572: 99.4420% ( 4) 00:06:55.324 20366.572 - 20467.397: 99.4587% ( 3) 00:06:55.324 20467.397 - 20568.222: 99.4810% ( 4) 00:06:55.324 20568.222 - 20669.046: 99.5033% ( 4) 00:06:55.324 20669.046 - 20769.871: 99.5257% ( 4) 00:06:55.324 20769.871 - 20870.695: 99.5480% ( 4) 00:06:55.324 20870.695 - 20971.520: 99.5703% ( 4) 00:06:55.324 20971.520 - 21072.345: 99.5926% ( 4) 00:06:55.324 21072.345 - 21173.169: 99.6205% ( 5) 00:06:55.324 21173.169 - 21273.994: 99.6429% ( 4) 00:06:55.324 23996.258 - 24097.083: 99.6540% ( 2) 00:06:55.324 24097.083 - 24197.908: 99.6763% ( 4) 00:06:55.324 24197.908 - 24298.732: 99.6987% ( 4) 00:06:55.324 24298.732 - 24399.557: 99.7210% ( 4) 00:06:55.324 24399.557 - 24500.382: 99.7433% ( 4) 00:06:55.324 24500.382 - 24601.206: 99.7656% ( 4) 00:06:55.324 24601.206 - 24702.031: 99.7824% ( 3) 00:06:55.324 24702.031 - 24802.855: 99.8047% ( 4) 00:06:55.324 24802.855 - 24903.680: 99.8270% ( 4) 00:06:55.324 24903.680 - 25004.505: 99.8493% ( 4) 00:06:55.324 25004.505 - 25105.329: 99.8717% ( 4) 00:06:55.324 25105.329 - 25206.154: 99.8996% ( 5) 00:06:55.324 25206.154 - 25306.978: 99.9219% ( 4) 00:06:55.324 25306.978 - 25407.803: 99.9442% ( 4) 00:06:55.324 25407.803 - 25508.628: 99.9665% ( 4) 00:06:55.324 25508.628 - 25609.452: 99.9944% ( 5) 00:06:55.324 25609.452 - 25710.277: 100.0000% ( 1) 00:06:55.324 00:06:55.324 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:06:55.324 ============================================================================== 00:06:55.324 Range in us Cumulative IO count 00:06:55.324 5822.622 - 5847.828: 0.0056% ( 1) 00:06:55.324 5847.828 - 5873.034: 0.0112% ( 1) 00:06:55.324 5948.652 - 5973.858: 0.0167% ( 1) 00:06:55.324 5999.065 - 6024.271: 0.0223% ( 1) 00:06:55.324 6024.271 - 6049.477: 0.0781% ( 10) 00:06:55.324 6049.477 - 6074.683: 0.1451% ( 12) 00:06:55.324 6074.683 - 6099.889: 0.2344% ( 16) 00:06:55.325 
6099.889 - 6125.095: 0.3237% ( 16) 00:06:55.325 6125.095 - 6150.302: 0.4520% ( 23) 00:06:55.325 6150.302 - 6175.508: 0.6027% ( 27) 00:06:55.325 6175.508 - 6200.714: 0.7310% ( 23) 00:06:55.325 6200.714 - 6225.920: 0.9431% ( 38) 00:06:55.325 6225.920 - 6251.126: 1.3058% ( 65) 00:06:55.325 6251.126 - 6276.332: 1.5681% ( 47) 00:06:55.325 6276.332 - 6301.538: 1.9085% ( 61) 00:06:55.325 6301.538 - 6326.745: 2.3605% ( 81) 00:06:55.325 6326.745 - 6351.951: 2.9799% ( 111) 00:06:55.325 6351.951 - 6377.157: 3.4375% ( 82) 00:06:55.325 6377.157 - 6402.363: 4.0513% ( 110) 00:06:55.325 6402.363 - 6427.569: 4.8493% ( 143) 00:06:55.325 6427.569 - 6452.775: 5.9933% ( 205) 00:06:55.325 6452.775 - 6503.188: 8.5714% ( 462) 00:06:55.325 6503.188 - 6553.600: 12.5781% ( 718) 00:06:55.325 6553.600 - 6604.012: 18.0748% ( 985) 00:06:55.325 6604.012 - 6654.425: 24.9721% ( 1236) 00:06:55.325 6654.425 - 6704.837: 32.4777% ( 1345) 00:06:55.325 6704.837 - 6755.249: 39.1629% ( 1198) 00:06:55.325 6755.249 - 6805.662: 45.4018% ( 1118) 00:06:55.325 6805.662 - 6856.074: 51.6239% ( 1115) 00:06:55.325 6856.074 - 6906.486: 58.6272% ( 1255) 00:06:55.325 6906.486 - 6956.898: 64.2299% ( 1004) 00:06:55.325 6956.898 - 7007.311: 68.8839% ( 834) 00:06:55.325 7007.311 - 7057.723: 72.2600% ( 605) 00:06:55.325 7057.723 - 7108.135: 75.0446% ( 499) 00:06:55.325 7108.135 - 7158.548: 77.3438% ( 412) 00:06:55.325 7158.548 - 7208.960: 79.0011% ( 297) 00:06:55.325 7208.960 - 7259.372: 80.3516% ( 242) 00:06:55.325 7259.372 - 7309.785: 81.5960% ( 223) 00:06:55.325 7309.785 - 7360.197: 82.6004% ( 180) 00:06:55.325 7360.197 - 7410.609: 83.4933% ( 160) 00:06:55.325 7410.609 - 7461.022: 84.5480% ( 189) 00:06:55.325 7461.022 - 7511.434: 85.5022% ( 171) 00:06:55.325 7511.434 - 7561.846: 86.3560% ( 153) 00:06:55.325 7561.846 - 7612.258: 87.1429% ( 141) 00:06:55.325 7612.258 - 7662.671: 87.7121% ( 102) 00:06:55.325 7662.671 - 7713.083: 88.3203% ( 109) 00:06:55.325 7713.083 - 7763.495: 88.8393% ( 93) 00:06:55.325 7763.495 - 7813.908: 89.6261% ( 141) 00:06:55.325 7813.908 - 7864.320: 90.2455% ( 111) 00:06:55.325 7864.320 - 7914.732: 90.7031% ( 82) 00:06:55.325 7914.732 - 7965.145: 91.2277% ( 94) 00:06:55.325 7965.145 - 8015.557: 91.6574% ( 77) 00:06:55.325 8015.557 - 8065.969: 92.0312% ( 67) 00:06:55.325 8065.969 - 8116.382: 92.3326% ( 54) 00:06:55.325 8116.382 - 8166.794: 92.7902% ( 82) 00:06:55.325 8166.794 - 8217.206: 93.2310% ( 79) 00:06:55.325 8217.206 - 8267.618: 93.5268% ( 53) 00:06:55.325 8267.618 - 8318.031: 93.8225% ( 53) 00:06:55.325 8318.031 - 8368.443: 94.1741% ( 63) 00:06:55.325 8368.443 - 8418.855: 94.4141% ( 43) 00:06:55.325 8418.855 - 8469.268: 94.7433% ( 59) 00:06:55.325 8469.268 - 8519.680: 94.9777% ( 42) 00:06:55.325 8519.680 - 8570.092: 95.1339% ( 28) 00:06:55.325 8570.092 - 8620.505: 95.3013% ( 30) 00:06:55.325 8620.505 - 8670.917: 95.4688% ( 30) 00:06:55.325 8670.917 - 8721.329: 95.6808% ( 38) 00:06:55.325 8721.329 - 8771.742: 95.8705% ( 34) 00:06:55.325 8771.742 - 8822.154: 96.0882% ( 39) 00:06:55.325 8822.154 - 8872.566: 96.2891% ( 36) 00:06:55.325 8872.566 - 8922.978: 96.5290% ( 43) 00:06:55.325 8922.978 - 8973.391: 96.7746% ( 44) 00:06:55.325 8973.391 - 9023.803: 97.0424% ( 48) 00:06:55.325 9023.803 - 9074.215: 97.1763% ( 24) 00:06:55.325 9074.215 - 9124.628: 97.2991% ( 22) 00:06:55.325 9124.628 - 9175.040: 97.5335% ( 42) 00:06:55.325 9175.040 - 9225.452: 97.6060% ( 13) 00:06:55.325 9225.452 - 9275.865: 97.7009% ( 17) 00:06:55.325 9275.865 - 9326.277: 97.8571% ( 28) 00:06:55.325 9326.277 - 9376.689: 97.9241% ( 12) 00:06:55.325 
9376.689 - 9427.102: 97.9855% ( 11) 00:06:55.325 9427.102 - 9477.514: 98.0580% ( 13) 00:06:55.325 9477.514 - 9527.926: 98.2533% ( 35) 00:06:55.325 9527.926 - 9578.338: 98.4989% ( 44) 00:06:55.325 9578.338 - 9628.751: 98.6551% ( 28) 00:06:55.325 9628.751 - 9679.163: 98.7277% ( 13) 00:06:55.325 9679.163 - 9729.575: 98.7835% ( 10) 00:06:55.325 9729.575 - 9779.988: 98.8393% ( 10) 00:06:55.325 9779.988 - 9830.400: 98.8839% ( 8) 00:06:55.325 9830.400 - 9880.812: 98.9286% ( 8) 00:06:55.325 9880.812 - 9931.225: 98.9844% ( 10) 00:06:55.325 9931.225 - 9981.637: 99.0179% ( 6) 00:06:55.325 9981.637 - 10032.049: 99.0513% ( 6) 00:06:55.325 10032.049 - 10082.462: 99.0904% ( 7) 00:06:55.325 10082.462 - 10132.874: 99.1239% ( 6) 00:06:55.325 10132.874 - 10183.286: 99.1574% ( 6) 00:06:55.325 10183.286 - 10233.698: 99.1908% ( 6) 00:06:55.325 10233.698 - 10284.111: 99.2243% ( 6) 00:06:55.325 10284.111 - 10334.523: 99.2411% ( 3) 00:06:55.325 10334.523 - 10384.935: 99.2522% ( 2) 00:06:55.325 10384.935 - 10435.348: 99.2634% ( 2) 00:06:55.325 10435.348 - 10485.760: 99.2801% ( 3) 00:06:55.325 10485.760 - 10536.172: 99.2857% ( 1) 00:06:55.325 17845.957 - 17946.782: 99.2969% ( 2) 00:06:55.325 17946.782 - 18047.606: 99.3192% ( 4) 00:06:55.325 18047.606 - 18148.431: 99.3415% ( 4) 00:06:55.325 18148.431 - 18249.255: 99.3694% ( 5) 00:06:55.325 18249.255 - 18350.080: 99.3917% ( 4) 00:06:55.325 18350.080 - 18450.905: 99.4141% ( 4) 00:06:55.325 18450.905 - 18551.729: 99.4364% ( 4) 00:06:55.325 18551.729 - 18652.554: 99.4587% ( 4) 00:06:55.325 18652.554 - 18753.378: 99.4810% ( 4) 00:06:55.325 18753.378 - 18854.203: 99.5033% ( 4) 00:06:55.325 18854.203 - 18955.028: 99.5201% ( 3) 00:06:55.325 18955.028 - 19055.852: 99.5424% ( 4) 00:06:55.325 19055.852 - 19156.677: 99.5592% ( 3) 00:06:55.325 19156.677 - 19257.502: 99.5815% ( 4) 00:06:55.325 19257.502 - 19358.326: 99.6038% ( 4) 00:06:55.325 19358.326 - 19459.151: 99.6317% ( 5) 00:06:55.325 19459.151 - 19559.975: 99.6429% ( 2) 00:06:55.325 22282.240 - 22383.065: 99.6484% ( 1) 00:06:55.325 22383.065 - 22483.889: 99.6708% ( 4) 00:06:55.325 22483.889 - 22584.714: 99.6931% ( 4) 00:06:55.325 22584.714 - 22685.538: 99.7154% ( 4) 00:06:55.325 22685.538 - 22786.363: 99.7321% ( 3) 00:06:55.325 22786.363 - 22887.188: 99.7545% ( 4) 00:06:55.325 22887.188 - 22988.012: 99.7768% ( 4) 00:06:55.325 22988.012 - 23088.837: 99.7991% ( 4) 00:06:55.325 23088.837 - 23189.662: 99.8214% ( 4) 00:06:55.325 23189.662 - 23290.486: 99.8382% ( 3) 00:06:55.325 23290.486 - 23391.311: 99.8605% ( 4) 00:06:55.325 23391.311 - 23492.135: 99.8828% ( 4) 00:06:55.325 23492.135 - 23592.960: 99.9107% ( 5) 00:06:55.325 23592.960 - 23693.785: 99.9330% ( 4) 00:06:55.325 23693.785 - 23794.609: 99.9554% ( 4) 00:06:55.325 23794.609 - 23895.434: 99.9777% ( 4) 00:06:55.325 23895.434 - 23996.258: 100.0000% ( 4) 00:06:55.325 00:06:55.325 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:06:55.325 ============================================================================== 00:06:55.325 Range in us Cumulative IO count 00:06:55.325 5898.240 - 5923.446: 0.0056% ( 1) 00:06:55.325 5973.858 - 5999.065: 0.0222% ( 3) 00:06:55.325 5999.065 - 6024.271: 0.0612% ( 7) 00:06:55.325 6024.271 - 6049.477: 0.1001% ( 7) 00:06:55.325 6049.477 - 6074.683: 0.1668% ( 12) 00:06:55.325 6074.683 - 6099.889: 0.2613% ( 17) 00:06:55.325 6099.889 - 6125.095: 0.3726% ( 20) 00:06:55.325 6125.095 - 6150.302: 0.4838% ( 20) 00:06:55.325 6150.302 - 6175.508: 0.6506% ( 30) 00:06:55.325 6175.508 - 6200.714: 0.8563% ( 37) 00:06:55.325 6200.714 - 
6225.920: 1.0621% ( 37) 00:06:55.325 6225.920 - 6251.126: 1.4012% ( 61) 00:06:55.325 6251.126 - 6276.332: 1.8183% ( 75) 00:06:55.325 6276.332 - 6301.538: 2.3354% ( 93) 00:06:55.325 6301.538 - 6326.745: 2.6412% ( 55) 00:06:55.325 6326.745 - 6351.951: 3.0194% ( 68) 00:06:55.325 6351.951 - 6377.157: 3.4419% ( 76) 00:06:55.325 6377.157 - 6402.363: 4.0814% ( 115) 00:06:55.325 6402.363 - 6427.569: 4.9655% ( 159) 00:06:55.325 6427.569 - 6452.775: 6.2333% ( 228) 00:06:55.325 6452.775 - 6503.188: 8.4631% ( 401) 00:06:55.325 6503.188 - 6553.600: 12.4555% ( 718) 00:06:55.325 6553.600 - 6604.012: 17.9827% ( 994) 00:06:55.325 6604.012 - 6654.425: 24.2493% ( 1127) 00:06:55.325 6654.425 - 6704.837: 32.2286% ( 1435) 00:06:55.325 6704.837 - 6755.249: 38.5398% ( 1135) 00:06:55.325 6755.249 - 6805.662: 45.2180% ( 1201) 00:06:55.325 6805.662 - 6856.074: 52.0240% ( 1224) 00:06:55.325 6856.074 - 6906.486: 57.7402% ( 1028) 00:06:55.325 6906.486 - 6956.898: 63.5621% ( 1047) 00:06:55.325 6956.898 - 7007.311: 67.9159% ( 783) 00:06:55.325 7007.311 - 7057.723: 72.3699% ( 801) 00:06:55.325 7057.723 - 7108.135: 75.5672% ( 575) 00:06:55.325 7108.135 - 7158.548: 77.8192% ( 405) 00:06:55.325 7158.548 - 7208.960: 79.2871% ( 264) 00:06:55.325 7208.960 - 7259.372: 80.5160% ( 221) 00:06:55.325 7259.372 - 7309.785: 81.8172% ( 234) 00:06:55.325 7309.785 - 7360.197: 82.7013% ( 159) 00:06:55.325 7360.197 - 7410.609: 83.5965% ( 161) 00:06:55.325 7410.609 - 7461.022: 84.5029% ( 163) 00:06:55.325 7461.022 - 7511.434: 85.4704% ( 174) 00:06:55.325 7511.434 - 7561.846: 86.2544% ( 141) 00:06:55.325 7561.846 - 7612.258: 86.8883% ( 114) 00:06:55.325 7612.258 - 7662.671: 87.5056% ( 111) 00:06:55.325 7662.671 - 7713.083: 87.8781% ( 67) 00:06:55.325 7713.083 - 7763.495: 88.2395% ( 65) 00:06:55.325 7763.495 - 7813.908: 88.8456% ( 109) 00:06:55.325 7813.908 - 7864.320: 89.4517% ( 109) 00:06:55.325 7864.320 - 7914.732: 89.9244% ( 85) 00:06:55.325 7914.732 - 7965.145: 90.3637% ( 79) 00:06:55.325 7965.145 - 8015.557: 91.0254% ( 119) 00:06:55.325 8015.557 - 8065.969: 91.8205% ( 143) 00:06:55.325 8065.969 - 8116.382: 92.6157% ( 143) 00:06:55.325 8116.382 - 8166.794: 92.9215% ( 55) 00:06:55.325 8166.794 - 8217.206: 93.2218% ( 54) 00:06:55.325 8217.206 - 8267.618: 93.5443% ( 58) 00:06:55.325 8267.618 - 8318.031: 93.9168% ( 67) 00:06:55.325 8318.031 - 8368.443: 94.2226% ( 55) 00:06:55.325 8368.443 - 8418.855: 94.5173% ( 53) 00:06:55.325 8418.855 - 8469.268: 94.7120% ( 35) 00:06:55.325 8469.268 - 8519.680: 94.9121% ( 36) 00:06:55.325 8519.680 - 8570.092: 95.1234% ( 38) 00:06:55.325 8570.092 - 8620.505: 95.3570% ( 42) 00:06:55.325 8620.505 - 8670.917: 95.6016% ( 44) 00:06:55.325 8670.917 - 8721.329: 95.8964% ( 53) 00:06:55.326 8721.329 - 8771.742: 96.1855% ( 52) 00:06:55.326 8771.742 - 8822.154: 96.3134% ( 23) 00:06:55.326 8822.154 - 8872.566: 96.4691% ( 28) 00:06:55.326 8872.566 - 8922.978: 96.6526% ( 33) 00:06:55.326 8922.978 - 8973.391: 96.8806% ( 41) 00:06:55.326 8973.391 - 9023.803: 97.2698% ( 70) 00:06:55.326 9023.803 - 9074.215: 97.3977% ( 23) 00:06:55.326 9074.215 - 9124.628: 97.6868% ( 52) 00:06:55.326 9124.628 - 9175.040: 97.8092% ( 22) 00:06:55.326 9175.040 - 9225.452: 97.9593% ( 27) 00:06:55.326 9225.452 - 9275.865: 98.0761% ( 21) 00:06:55.326 9275.865 - 9326.277: 98.1873% ( 20) 00:06:55.326 9326.277 - 9376.689: 98.3263% ( 25) 00:06:55.326 9376.689 - 9427.102: 98.3763% ( 9) 00:06:55.326 9427.102 - 9477.514: 98.4264% ( 9) 00:06:55.326 9477.514 - 9527.926: 98.5098% ( 15) 00:06:55.326 9527.926 - 9578.338: 98.6266% ( 21) 00:06:55.326 9578.338 - 
9628.751: 98.7433% ( 21) 00:06:55.326 9628.751 - 9679.163: 98.8545% ( 20) 00:06:55.326 9679.163 - 9729.575: 98.9657% ( 20) 00:06:55.326 9729.575 - 9779.988: 99.0102% ( 8) 00:06:55.326 9779.988 - 9830.400: 99.0436% ( 6) 00:06:55.326 9830.400 - 9880.812: 99.0658% ( 4) 00:06:55.326 9880.812 - 9931.225: 99.0992% ( 6) 00:06:55.326 9931.225 - 9981.637: 99.1214% ( 4) 00:06:55.326 9981.637 - 10032.049: 99.1437% ( 4) 00:06:55.326 10032.049 - 10082.462: 99.1659% ( 4) 00:06:55.326 10082.462 - 10132.874: 99.1937% ( 5) 00:06:55.326 10132.874 - 10183.286: 99.2160% ( 4) 00:06:55.326 10183.286 - 10233.698: 99.2327% ( 3) 00:06:55.326 10233.698 - 10284.111: 99.2438% ( 2) 00:06:55.326 10284.111 - 10334.523: 99.2605% ( 3) 00:06:55.326 10334.523 - 10384.935: 99.2716% ( 2) 00:06:55.326 10384.935 - 10435.348: 99.2883% ( 3) 00:06:55.326 12905.551 - 13006.375: 99.3049% ( 3) 00:06:55.326 13006.375 - 13107.200: 99.3272% ( 4) 00:06:55.326 13107.200 - 13208.025: 99.3494% ( 4) 00:06:55.326 13208.025 - 13308.849: 99.3772% ( 5) 00:06:55.326 13308.849 - 13409.674: 99.3995% ( 4) 00:06:55.326 13409.674 - 13510.498: 99.4217% ( 4) 00:06:55.326 13510.498 - 13611.323: 99.4440% ( 4) 00:06:55.326 13611.323 - 13712.148: 99.4662% ( 4) 00:06:55.326 13712.148 - 13812.972: 99.4940% ( 5) 00:06:55.326 13812.972 - 13913.797: 99.5162% ( 4) 00:06:55.326 13913.797 - 14014.622: 99.5385% ( 4) 00:06:55.326 14014.622 - 14115.446: 99.5607% ( 4) 00:06:55.326 14115.446 - 14216.271: 99.5830% ( 4) 00:06:55.326 14216.271 - 14317.095: 99.6052% ( 4) 00:06:55.326 14317.095 - 14417.920: 99.6274% ( 4) 00:06:55.326 14417.920 - 14518.745: 99.6441% ( 3) 00:06:55.326 17241.009 - 17341.834: 99.6608% ( 3) 00:06:55.326 17341.834 - 17442.658: 99.6831% ( 4) 00:06:55.326 17442.658 - 17543.483: 99.7053% ( 4) 00:06:55.326 17543.483 - 17644.308: 99.7275% ( 4) 00:06:55.326 17644.308 - 17745.132: 99.7498% ( 4) 00:06:55.326 17745.132 - 17845.957: 99.7776% ( 5) 00:06:55.326 17845.957 - 17946.782: 99.7998% ( 4) 00:06:55.326 17946.782 - 18047.606: 99.8221% ( 4) 00:06:55.326 18047.606 - 18148.431: 99.8443% ( 4) 00:06:55.326 18148.431 - 18249.255: 99.8665% ( 4) 00:06:55.326 18249.255 - 18350.080: 99.8888% ( 4) 00:06:55.326 18350.080 - 18450.905: 99.9110% ( 4) 00:06:55.326 18450.905 - 18551.729: 99.9333% ( 4) 00:06:55.326 18551.729 - 18652.554: 99.9555% ( 4) 00:06:55.326 18652.554 - 18753.378: 99.9833% ( 5) 00:06:55.326 18753.378 - 18854.203: 100.0000% ( 3) 00:06:55.326 00:06:55.326 17:10:38 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:06:55.326 00:06:55.326 real 0m2.514s 00:06:55.326 user 0m2.205s 00:06:55.326 sys 0m0.207s 00:06:55.326 17:10:38 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.326 17:10:38 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.326 ************************************ 00:06:55.326 END TEST nvme_perf 00:06:55.326 ************************************ 00:06:55.584 17:10:38 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:06:55.584 17:10:38 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:55.584 17:10:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.584 17:10:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.584 ************************************ 00:06:55.584 START TEST nvme_hello_world 00:06:55.584 ************************************ 00:06:55.584 17:10:38 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:06:55.584 Initializing NVMe Controllers 00:06:55.584 Attached to 0000:00:10.0 00:06:55.584 Namespace ID: 1 size: 6GB 00:06:55.584 Attached to 0000:00:11.0 00:06:55.584 Namespace ID: 1 size: 5GB 00:06:55.584 Attached to 0000:00:13.0 00:06:55.584 Namespace ID: 1 size: 1GB 00:06:55.584 Attached to 0000:00:12.0 00:06:55.584 Namespace ID: 1 size: 4GB 00:06:55.584 Namespace ID: 2 size: 4GB 00:06:55.584 Namespace ID: 3 size: 4GB 00:06:55.584 Initialization complete. 00:06:55.584 INFO: using host memory buffer for IO 00:06:55.584 Hello world! 00:06:55.584 INFO: using host memory buffer for IO 00:06:55.584 Hello world! 00:06:55.584 INFO: using host memory buffer for IO 00:06:55.584 Hello world! 00:06:55.584 INFO: using host memory buffer for IO 00:06:55.584 Hello world! 00:06:55.584 INFO: using host memory buffer for IO 00:06:55.584 Hello world! 00:06:55.584 INFO: using host memory buffer for IO 00:06:55.584 Hello world! 00:06:55.584 00:06:55.584 real 0m0.234s 00:06:55.584 user 0m0.086s 00:06:55.584 sys 0m0.099s 00:06:55.584 17:10:38 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.842 17:10:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:55.842 ************************************ 00:06:55.842 END TEST nvme_hello_world 00:06:55.842 ************************************ 00:06:55.842 17:10:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:06:55.842 17:10:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.842 17:10:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.842 17:10:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.842 ************************************ 00:06:55.842 START TEST nvme_sgl 00:06:55.842 ************************************ 00:06:55.842 17:10:38 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:06:55.842 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:06:55.842 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:06:55.842 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:06:55.842 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:06:55.842 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:06:55.842 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:06:55.842 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:06:55.842 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:06:55.842 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:06:56.102 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:06:56.102 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:06:56.102 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_8 Invalid IO 
length parameter 00:06:56.102 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:06:56.102 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:06:56.102 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:06:56.102 NVMe Readv/Writev Request test 00:06:56.102 Attached to 0000:00:10.0 00:06:56.102 Attached to 0000:00:11.0 00:06:56.102 Attached to 0000:00:13.0 00:06:56.102 Attached to 0000:00:12.0 00:06:56.102 0000:00:10.0: build_io_request_2 test passed 00:06:56.102 0000:00:10.0: build_io_request_4 test passed 00:06:56.102 0000:00:10.0: build_io_request_5 test passed 00:06:56.102 0000:00:10.0: build_io_request_6 test passed 00:06:56.102 0000:00:10.0: build_io_request_7 test passed 00:06:56.102 0000:00:10.0: build_io_request_10 test passed 00:06:56.102 0000:00:11.0: build_io_request_2 test passed 00:06:56.102 0000:00:11.0: build_io_request_4 test passed 00:06:56.102 0000:00:11.0: build_io_request_5 test passed 00:06:56.102 0000:00:11.0: build_io_request_6 test passed 00:06:56.102 0000:00:11.0: build_io_request_7 test passed 00:06:56.102 0000:00:11.0: build_io_request_10 test passed 00:06:56.102 Cleaning up... 00:06:56.102 00:06:56.102 real 0m0.272s 00:06:56.102 user 0m0.141s 00:06:56.102 sys 0m0.092s 00:06:56.102 17:10:38 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.102 17:10:38 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:06:56.102 ************************************ 00:06:56.102 END TEST nvme_sgl 00:06:56.102 ************************************ 00:06:56.102 17:10:38 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:06:56.102 17:10:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.102 17:10:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.102 17:10:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.102 ************************************ 00:06:56.102 START TEST nvme_e2edp 00:06:56.102 ************************************ 00:06:56.103 17:10:38 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:06:56.364 NVMe Write/Read with End-to-End data protection test 00:06:56.364 Attached to 0000:00:10.0 00:06:56.364 Attached to 0000:00:11.0 00:06:56.364 Attached to 0000:00:13.0 00:06:56.364 Attached to 0000:00:12.0 00:06:56.364 Cleaning up... 
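The SGL request checks above and the end-to-end data protection run that finishes with "Cleaning up..." both drive prebuilt test binaries directly against the four emulated controllers; the timing summary for the e2edp step follows below. A minimal sketch, assuming the repo layout shown in this log and that scripts/setup.sh has already rebound the devices to a userspace driver, of invoking those binaries outside the autotest wrapper:

#!/usr/bin/env bash
# Hypothetical standalone run of the SGL and end-to-end data protection test
# binaries referenced in this log. Paths are copied from the log; running
# scripts/setup.sh first (to unbind the kernel NVMe driver) is an assumption
# and is normally done once before the whole suite, not per test.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/scripts/setup.sh"
sudo "$SPDK_DIR/test/nvme/sgl/sgl"         # scatter-gather build_io_request_* checks
sudo "$SPDK_DIR/test/nvme/e2edp/nvme_dp"   # end-to-end data protection write/read test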
00:06:56.364 00:06:56.364 real 0m0.203s 00:06:56.364 user 0m0.076s 00:06:56.364 sys 0m0.085s 00:06:56.364 17:10:39 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.364 17:10:39 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:06:56.364 ************************************ 00:06:56.364 END TEST nvme_e2edp 00:06:56.364 ************************************ 00:06:56.364 17:10:39 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:06:56.364 17:10:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.364 17:10:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.364 17:10:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.364 ************************************ 00:06:56.364 START TEST nvme_reserve 00:06:56.364 ************************************ 00:06:56.364 17:10:39 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:06:56.622 ===================================================== 00:06:56.622 NVMe Controller at PCI bus 0, device 16, function 0 00:06:56.622 ===================================================== 00:06:56.622 Reservations: Not Supported 00:06:56.622 ===================================================== 00:06:56.622 NVMe Controller at PCI bus 0, device 17, function 0 00:06:56.622 ===================================================== 00:06:56.622 Reservations: Not Supported 00:06:56.623 ===================================================== 00:06:56.623 NVMe Controller at PCI bus 0, device 19, function 0 00:06:56.623 ===================================================== 00:06:56.623 Reservations: Not Supported 00:06:56.623 ===================================================== 00:06:56.623 NVMe Controller at PCI bus 0, device 18, function 0 00:06:56.623 ===================================================== 00:06:56.623 Reservations: Not Supported 00:06:56.623 Reservation test passed 00:06:56.623 00:06:56.623 real 0m0.212s 00:06:56.623 user 0m0.072s 00:06:56.623 sys 0m0.094s 00:06:56.623 17:10:39 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.623 17:10:39 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:06:56.623 ************************************ 00:06:56.623 END TEST nvme_reserve 00:06:56.623 ************************************ 00:06:56.623 17:10:39 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:06:56.623 17:10:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.623 17:10:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.623 17:10:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.623 ************************************ 00:06:56.623 START TEST nvme_err_injection 00:06:56.623 ************************************ 00:06:56.623 17:10:39 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:06:56.623 NVMe Error Injection test 00:06:56.623 Attached to 0000:00:10.0 00:06:56.623 Attached to 0000:00:11.0 00:06:56.623 Attached to 0000:00:13.0 00:06:56.623 Attached to 0000:00:12.0 00:06:56.623 0000:00:13.0: get features failed as expected 00:06:56.623 0000:00:12.0: get features failed as expected 00:06:56.623 0000:00:10.0: get features failed as expected 00:06:56.623 0000:00:11.0: get features failed as expected 00:06:56.623 
0000:00:10.0: get features successfully as expected 00:06:56.623 0000:00:11.0: get features successfully as expected 00:06:56.623 0000:00:13.0: get features successfully as expected 00:06:56.623 0000:00:12.0: get features successfully as expected 00:06:56.623 0000:00:10.0: read failed as expected 00:06:56.623 0000:00:11.0: read failed as expected 00:06:56.623 0000:00:13.0: read failed as expected 00:06:56.623 0000:00:12.0: read failed as expected 00:06:56.623 0000:00:13.0: read successfully as expected 00:06:56.623 0000:00:10.0: read successfully as expected 00:06:56.623 0000:00:11.0: read successfully as expected 00:06:56.623 0000:00:12.0: read successfully as expected 00:06:56.623 Cleaning up... 00:06:56.883 00:06:56.883 real 0m0.222s 00:06:56.883 user 0m0.085s 00:06:56.883 sys 0m0.090s 00:06:56.883 17:10:39 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.883 ************************************ 00:06:56.883 END TEST nvme_err_injection 00:06:56.883 17:10:39 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:06:56.883 ************************************ 00:06:56.883 17:10:39 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:06:56.883 17:10:39 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:06:56.883 17:10:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.883 17:10:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.883 ************************************ 00:06:56.883 START TEST nvme_overhead 00:06:56.883 ************************************ 00:06:56.883 17:10:39 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:06:58.268 Initializing NVMe Controllers 00:06:58.268 Attached to 0000:00:10.0 00:06:58.268 Attached to 0000:00:11.0 00:06:58.268 Attached to 0000:00:13.0 00:06:58.268 Attached to 0000:00:12.0 00:06:58.268 Initialization complete. Launching workers. 
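The submit/complete latency summary and the two histograms that follow are printed by the overhead tool invoked above. A sketch of re-running it with a larger I/O size; the flag meanings (-o I/O size in bytes, -t run time in seconds, -H print histograms, -i shared memory id) are inferred from this invocation and its output, not taken from the tool's help text:

# Hypothetical overhead re-run with 128 KiB I/Os instead of 4 KiB.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/test/nvme/overhead/overhead" -o 131072 -t 1 -H -i 0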
00:06:58.268 submit (in ns) avg, min, max = 11383.7, 10741.5, 216999.2 00:06:58.268 complete (in ns) avg, min, max = 7688.1, 7203.1, 125946.9 00:06:58.268 00:06:58.268 Submit histogram 00:06:58.268 ================ 00:06:58.268 Range in us Cumulative Count 00:06:58.268 10.732 - 10.782: 0.1338% ( 23) 00:06:58.268 10.782 - 10.831: 1.0705% ( 161) 00:06:58.268 10.831 - 10.880: 4.3635% ( 566) 00:06:58.268 10.880 - 10.929: 11.8629% ( 1289) 00:06:58.268 10.929 - 10.978: 23.8829% ( 2066) 00:06:58.268 10.978 - 11.028: 38.6258% ( 2534) 00:06:58.268 11.028 - 11.077: 53.2232% ( 2509) 00:06:58.268 11.077 - 11.126: 64.9465% ( 2015) 00:06:58.268 11.126 - 11.175: 73.6677% ( 1499) 00:06:58.268 11.175 - 11.225: 79.3402% ( 975) 00:06:58.268 11.225 - 11.274: 83.1510% ( 655) 00:06:58.268 11.274 - 11.323: 85.7051% ( 439) 00:06:58.268 11.323 - 11.372: 87.7298% ( 348) 00:06:58.268 11.372 - 11.422: 89.3356% ( 276) 00:06:58.268 11.422 - 11.471: 90.6679% ( 229) 00:06:58.268 11.471 - 11.520: 91.5581% ( 153) 00:06:58.268 11.520 - 11.569: 92.2446% ( 118) 00:06:58.268 11.569 - 11.618: 92.7624% ( 89) 00:06:58.268 11.618 - 11.668: 93.1173% ( 61) 00:06:58.268 11.668 - 11.717: 93.4896% ( 64) 00:06:58.268 11.717 - 11.766: 93.7747% ( 49) 00:06:58.268 11.766 - 11.815: 93.9900% ( 37) 00:06:58.268 11.815 - 11.865: 94.2227% ( 40) 00:06:58.268 11.865 - 11.914: 94.5136% ( 50) 00:06:58.268 11.914 - 11.963: 94.8045% ( 50) 00:06:58.268 11.963 - 12.012: 95.0605% ( 44) 00:06:58.268 12.012 - 12.062: 95.3398% ( 48) 00:06:58.268 12.062 - 12.111: 95.6249% ( 49) 00:06:58.268 12.111 - 12.160: 95.8518% ( 39) 00:06:58.268 12.160 - 12.209: 95.9856% ( 23) 00:06:58.268 12.209 - 12.258: 96.0612% ( 13) 00:06:58.268 12.258 - 12.308: 96.1194% ( 10) 00:06:58.268 12.308 - 12.357: 96.1950% ( 13) 00:06:58.268 12.357 - 12.406: 96.2416% ( 8) 00:06:58.268 12.406 - 12.455: 96.2648% ( 4) 00:06:58.268 12.455 - 12.505: 96.2997% ( 6) 00:06:58.268 12.505 - 12.554: 96.3172% ( 3) 00:06:58.268 12.554 - 12.603: 96.3405% ( 4) 00:06:58.268 12.603 - 12.702: 96.3579% ( 3) 00:06:58.268 12.702 - 12.800: 96.3870% ( 5) 00:06:58.268 12.800 - 12.898: 96.3987% ( 2) 00:06:58.268 12.898 - 12.997: 96.4452% ( 8) 00:06:58.268 12.997 - 13.095: 96.5499% ( 18) 00:06:58.268 13.095 - 13.194: 96.6837% ( 23) 00:06:58.268 13.194 - 13.292: 96.7885% ( 18) 00:06:58.268 13.292 - 13.391: 96.8874% ( 17) 00:06:58.268 13.391 - 13.489: 96.9572% ( 12) 00:06:58.268 13.489 - 13.588: 97.0328% ( 13) 00:06:58.268 13.588 - 13.686: 97.0968% ( 11) 00:06:58.268 13.686 - 13.785: 97.1434% ( 8) 00:06:58.268 13.785 - 13.883: 97.1783% ( 6) 00:06:58.268 13.883 - 13.982: 97.2364% ( 10) 00:06:58.268 13.982 - 14.080: 97.2597% ( 4) 00:06:58.268 14.080 - 14.178: 97.3004% ( 7) 00:06:58.268 14.178 - 14.277: 97.3354% ( 6) 00:06:58.268 14.277 - 14.375: 97.3586% ( 4) 00:06:58.268 14.375 - 14.474: 97.3877% ( 5) 00:06:58.268 14.474 - 14.572: 97.4401% ( 9) 00:06:58.268 14.572 - 14.671: 97.4866% ( 8) 00:06:58.268 14.671 - 14.769: 97.5448% ( 10) 00:06:58.268 14.769 - 14.868: 97.6204% ( 13) 00:06:58.268 14.868 - 14.966: 97.6612% ( 7) 00:06:58.268 14.966 - 15.065: 97.6786% ( 3) 00:06:58.268 15.065 - 15.163: 97.7252% ( 8) 00:06:58.268 15.163 - 15.262: 97.7717% ( 8) 00:06:58.268 15.262 - 15.360: 97.7892% ( 3) 00:06:58.268 15.360 - 15.458: 97.8066% ( 3) 00:06:58.268 15.557 - 15.655: 97.8357% ( 5) 00:06:58.268 15.655 - 15.754: 97.8590% ( 4) 00:06:58.268 15.754 - 15.852: 97.8822% ( 4) 00:06:58.268 15.852 - 15.951: 97.8997% ( 3) 00:06:58.269 15.951 - 16.049: 97.9230% ( 4) 00:06:58.269 16.049 - 16.148: 97.9579% ( 6) 00:06:58.269 16.148 - 
16.246: 97.9637% ( 1) 00:06:58.269 16.246 - 16.345: 97.9986% ( 6) 00:06:58.269 16.345 - 16.443: 98.0161% ( 3) 00:06:58.269 16.443 - 16.542: 98.0335% ( 3) 00:06:58.269 16.542 - 16.640: 98.1091% ( 13) 00:06:58.269 16.640 - 16.738: 98.2371% ( 22) 00:06:58.269 16.738 - 16.837: 98.3128% ( 13) 00:06:58.269 16.837 - 16.935: 98.3826% ( 12) 00:06:58.269 16.935 - 17.034: 98.4640% ( 14) 00:06:58.269 17.034 - 17.132: 98.5048% ( 7) 00:06:58.269 17.132 - 17.231: 98.5862% ( 14) 00:06:58.269 17.231 - 17.329: 98.6968% ( 19) 00:06:58.269 17.329 - 17.428: 98.7899% ( 16) 00:06:58.269 17.428 - 17.526: 98.8655% ( 13) 00:06:58.269 17.526 - 17.625: 98.9237% ( 10) 00:06:58.269 17.625 - 17.723: 98.9528% ( 5) 00:06:58.269 17.723 - 17.822: 98.9877% ( 6) 00:06:58.269 17.822 - 17.920: 98.9993% ( 2) 00:06:58.269 17.920 - 18.018: 99.0226% ( 4) 00:06:58.269 18.018 - 18.117: 99.0284% ( 1) 00:06:58.269 18.117 - 18.215: 99.0633% ( 6) 00:06:58.269 18.215 - 18.314: 99.0808% ( 3) 00:06:58.269 18.314 - 18.412: 99.0924% ( 2) 00:06:58.269 18.412 - 18.511: 99.1273% ( 6) 00:06:58.269 18.609 - 18.708: 99.1331% ( 1) 00:06:58.269 18.708 - 18.806: 99.1564% ( 4) 00:06:58.269 18.806 - 18.905: 99.1680% ( 2) 00:06:58.269 18.905 - 19.003: 99.1797% ( 2) 00:06:58.269 19.200 - 19.298: 99.1913% ( 2) 00:06:58.269 19.397 - 19.495: 99.1971% ( 1) 00:06:58.269 19.495 - 19.594: 99.2088% ( 2) 00:06:58.269 19.594 - 19.692: 99.2262% ( 3) 00:06:58.269 19.889 - 19.988: 99.2320% ( 1) 00:06:58.269 19.988 - 20.086: 99.2437% ( 2) 00:06:58.269 20.086 - 20.185: 99.2495% ( 1) 00:06:58.269 20.185 - 20.283: 99.2553% ( 1) 00:06:58.269 20.283 - 20.382: 99.2611% ( 1) 00:06:58.269 20.382 - 20.480: 99.2727% ( 2) 00:06:58.269 20.578 - 20.677: 99.2786% ( 1) 00:06:58.269 21.071 - 21.169: 99.2902% ( 2) 00:06:58.269 21.169 - 21.268: 99.3077% ( 3) 00:06:58.269 21.366 - 21.465: 99.3135% ( 1) 00:06:58.269 21.465 - 21.563: 99.3251% ( 2) 00:06:58.269 21.957 - 22.055: 99.3309% ( 1) 00:06:58.269 22.548 - 22.646: 99.3426% ( 2) 00:06:58.269 23.434 - 23.532: 99.3484% ( 1) 00:06:58.269 24.320 - 24.418: 99.3542% ( 1) 00:06:58.269 24.418 - 24.517: 99.3600% ( 1) 00:06:58.269 25.600 - 25.797: 99.3658% ( 1) 00:06:58.269 25.994 - 26.191: 99.3775% ( 2) 00:06:58.269 27.372 - 27.569: 99.4298% ( 9) 00:06:58.269 27.569 - 27.766: 99.5578% ( 22) 00:06:58.269 27.766 - 27.963: 99.7033% ( 25) 00:06:58.269 27.963 - 28.160: 99.7906% ( 15) 00:06:58.269 28.160 - 28.357: 99.8255% ( 6) 00:06:58.269 28.357 - 28.554: 99.8604% ( 6) 00:06:58.269 28.554 - 28.751: 99.8662% ( 1) 00:06:58.269 28.751 - 28.948: 99.8895% ( 4) 00:06:58.269 28.948 - 29.145: 99.9069% ( 3) 00:06:58.269 29.145 - 29.342: 99.9127% ( 1) 00:06:58.269 30.917 - 31.114: 99.9185% ( 1) 00:06:58.269 32.886 - 33.083: 99.9244% ( 1) 00:06:58.269 33.674 - 33.871: 99.9302% ( 1) 00:06:58.269 34.265 - 34.462: 99.9360% ( 1) 00:06:58.269 35.643 - 35.840: 99.9418% ( 1) 00:06:58.269 36.234 - 36.431: 99.9476% ( 1) 00:06:58.269 41.551 - 41.748: 99.9535% ( 1) 00:06:58.269 46.277 - 46.474: 99.9651% ( 2) 00:06:58.269 46.868 - 47.065: 99.9709% ( 1) 00:06:58.269 47.852 - 48.049: 99.9767% ( 1) 00:06:58.269 49.625 - 49.822: 99.9825% ( 1) 00:06:58.269 57.502 - 57.895: 99.9884% ( 1) 00:06:58.269 110.277 - 111.065: 99.9942% ( 1) 00:06:58.269 215.828 - 217.403: 100.0000% ( 1) 00:06:58.269 00:06:58.269 Complete histogram 00:06:58.269 ================== 00:06:58.269 Range in us Cumulative Count 00:06:58.269 7.188 - 7.237: 0.1804% ( 31) 00:06:58.269 7.237 - 7.286: 2.0770% ( 326) 00:06:58.269 7.286 - 7.335: 11.3859% ( 1600) 00:06:58.269 7.335 - 7.385: 33.3779% ( 3780) 
00:06:58.269 7.385 - 7.434: 59.6288% ( 4512) 00:06:58.269 7.434 - 7.483: 76.6523% ( 2926) 00:06:58.269 7.483 - 7.532: 84.4310% ( 1337) 00:06:58.269 7.532 - 7.582: 87.4913% ( 526) 00:06:58.269 7.582 - 7.631: 88.7189% ( 211) 00:06:58.269 7.631 - 7.680: 89.2425% ( 90) 00:06:58.269 7.680 - 7.729: 89.4927% ( 43) 00:06:58.269 7.729 - 7.778: 89.6090% ( 20) 00:06:58.269 7.778 - 7.828: 89.7138% ( 18) 00:06:58.269 7.828 - 7.877: 89.9407% ( 39) 00:06:58.269 7.877 - 7.926: 90.4526% ( 88) 00:06:58.269 7.926 - 7.975: 91.0519% ( 103) 00:06:58.269 7.975 - 8.025: 91.8315% ( 134) 00:06:58.269 8.025 - 8.074: 92.9835% ( 198) 00:06:58.269 8.074 - 8.123: 94.3740% ( 239) 00:06:58.269 8.123 - 8.172: 95.3340% ( 165) 00:06:58.269 8.172 - 8.222: 96.0205% ( 118) 00:06:58.269 8.222 - 8.271: 96.4685% ( 77) 00:06:58.269 8.271 - 8.320: 96.7768% ( 53) 00:06:58.269 8.320 - 8.369: 96.9165% ( 24) 00:06:58.269 8.369 - 8.418: 97.0154% ( 17) 00:06:58.269 8.418 - 8.468: 97.0735% ( 10) 00:06:58.269 8.468 - 8.517: 97.1084% ( 6) 00:06:58.269 8.517 - 8.566: 97.1492% ( 7) 00:06:58.269 8.566 - 8.615: 97.1724% ( 4) 00:06:58.269 8.615 - 8.665: 97.2306% ( 10) 00:06:58.269 8.665 - 8.714: 97.2364% ( 1) 00:06:58.269 8.714 - 8.763: 97.2481% ( 2) 00:06:58.269 8.763 - 8.812: 97.2539% ( 1) 00:06:58.269 8.862 - 8.911: 97.2597% ( 1) 00:06:58.269 8.911 - 8.960: 97.2714% ( 2) 00:06:58.269 8.960 - 9.009: 97.2830% ( 2) 00:06:58.269 9.009 - 9.058: 97.2946% ( 2) 00:06:58.269 9.108 - 9.157: 97.3063% ( 2) 00:06:58.269 9.157 - 9.206: 97.3179% ( 2) 00:06:58.269 9.255 - 9.305: 97.3295% ( 2) 00:06:58.269 9.305 - 9.354: 97.3412% ( 2) 00:06:58.269 9.403 - 9.452: 97.3528% ( 2) 00:06:58.269 9.452 - 9.502: 97.3644% ( 2) 00:06:58.269 9.502 - 9.551: 97.3819% ( 3) 00:06:58.269 9.551 - 9.600: 97.4052% ( 4) 00:06:58.269 9.600 - 9.649: 97.4168% ( 2) 00:06:58.269 9.698 - 9.748: 97.4226% ( 1) 00:06:58.269 9.748 - 9.797: 97.4401% ( 3) 00:06:58.269 9.797 - 9.846: 97.4517% ( 2) 00:06:58.269 9.846 - 9.895: 97.4750% ( 4) 00:06:58.269 9.895 - 9.945: 97.4808% ( 1) 00:06:58.269 9.945 - 9.994: 97.4866% ( 1) 00:06:58.269 9.994 - 10.043: 97.4983% ( 2) 00:06:58.269 10.043 - 10.092: 97.5332% ( 6) 00:06:58.269 10.092 - 10.142: 97.5623% ( 5) 00:06:58.269 10.142 - 10.191: 97.6030% ( 7) 00:06:58.269 10.191 - 10.240: 97.6088% ( 1) 00:06:58.269 10.240 - 10.289: 97.6263% ( 3) 00:06:58.269 10.289 - 10.338: 97.6437% ( 3) 00:06:58.269 10.338 - 10.388: 97.6612% ( 3) 00:06:58.269 10.388 - 10.437: 97.6786% ( 3) 00:06:58.269 10.437 - 10.486: 97.6844% ( 1) 00:06:58.269 10.486 - 10.535: 97.6961% ( 2) 00:06:58.269 10.535 - 10.585: 97.7077% ( 2) 00:06:58.269 10.585 - 10.634: 97.7135% ( 1) 00:06:58.269 10.634 - 10.683: 97.7252% ( 2) 00:06:58.269 10.732 - 10.782: 97.7310% ( 1) 00:06:58.269 10.782 - 10.831: 97.7426% ( 2) 00:06:58.269 10.831 - 10.880: 97.7601% ( 3) 00:06:58.269 10.880 - 10.929: 97.7659% ( 1) 00:06:58.269 10.978 - 11.028: 97.7717% ( 1) 00:06:58.269 11.028 - 11.077: 97.7833% ( 2) 00:06:58.269 11.323 - 11.372: 97.7892% ( 1) 00:06:58.269 11.372 - 11.422: 97.7950% ( 1) 00:06:58.269 11.471 - 11.520: 97.8008% ( 1) 00:06:58.269 11.569 - 11.618: 97.8066% ( 1) 00:06:58.269 11.618 - 11.668: 97.8124% ( 1) 00:06:58.269 11.914 - 11.963: 97.8241% ( 2) 00:06:58.269 11.963 - 12.012: 97.8299% ( 1) 00:06:58.269 12.012 - 12.062: 97.8415% ( 2) 00:06:58.269 12.062 - 12.111: 97.8473% ( 1) 00:06:58.269 12.111 - 12.160: 97.8648% ( 3) 00:06:58.269 12.209 - 12.258: 97.8706% ( 1) 00:06:58.269 12.258 - 12.308: 97.8764% ( 1) 00:06:58.269 12.308 - 12.357: 97.8997% ( 4) 00:06:58.269 12.406 - 12.455: 97.9113% ( 2) 
00:06:58.269 12.455 - 12.505: 97.9230% ( 2) 00:06:58.269 12.505 - 12.554: 97.9288% ( 1) 00:06:58.269 12.554 - 12.603: 97.9346% ( 1) 00:06:58.269 12.603 - 12.702: 97.9637% ( 5) 00:06:58.269 12.702 - 12.800: 97.9811% ( 3) 00:06:58.269 12.800 - 12.898: 97.9870% ( 1) 00:06:58.269 12.898 - 12.997: 98.0335% ( 8) 00:06:58.269 12.997 - 13.095: 98.0859% ( 9) 00:06:58.269 13.095 - 13.194: 98.1441% ( 10) 00:06:58.269 13.194 - 13.292: 98.1848% ( 7) 00:06:58.269 13.292 - 13.391: 98.2779% ( 16) 00:06:58.269 13.391 - 13.489: 98.3535% ( 13) 00:06:58.269 13.489 - 13.588: 98.4291% ( 13) 00:06:58.269 13.588 - 13.686: 98.5048% ( 13) 00:06:58.269 13.686 - 13.785: 98.6095% ( 18) 00:06:58.269 13.785 - 13.883: 98.6909% ( 14) 00:06:58.269 13.883 - 13.982: 98.7782% ( 15) 00:06:58.269 13.982 - 14.080: 98.8189% ( 7) 00:06:58.269 14.080 - 14.178: 98.8771% ( 10) 00:06:58.269 14.178 - 14.277: 98.9178% ( 7) 00:06:58.269 14.277 - 14.375: 98.9295% ( 2) 00:06:58.269 14.375 - 14.474: 98.9644% ( 6) 00:06:58.269 14.474 - 14.572: 99.0226% ( 10) 00:06:58.269 14.572 - 14.671: 99.0400% ( 3) 00:06:58.269 14.671 - 14.769: 99.0517% ( 2) 00:06:58.269 14.769 - 14.868: 99.0982% ( 8) 00:06:58.269 14.868 - 14.966: 99.1331% ( 6) 00:06:58.269 14.966 - 15.065: 99.1506% ( 3) 00:06:58.269 15.065 - 15.163: 99.1738% ( 4) 00:06:58.269 15.163 - 15.262: 99.1971% ( 4) 00:06:58.269 15.262 - 15.360: 99.2088% ( 2) 00:06:58.269 15.458 - 15.557: 99.2204% ( 2) 00:06:58.269 15.557 - 15.655: 99.2262% ( 1) 00:06:58.269 15.655 - 15.754: 99.2320% ( 1) 00:06:58.269 15.754 - 15.852: 99.2378% ( 1) 00:06:58.269 16.443 - 16.542: 99.2437% ( 1) 00:06:58.269 16.738 - 16.837: 99.2495% ( 1) 00:06:58.269 16.935 - 17.034: 99.2553% ( 1) 00:06:58.269 17.034 - 17.132: 99.2669% ( 2) 00:06:58.269 17.231 - 17.329: 99.2727% ( 1) 00:06:58.269 17.329 - 17.428: 99.2786% ( 1) 00:06:58.269 17.428 - 17.526: 99.2844% ( 1) 00:06:58.269 17.723 - 17.822: 99.2902% ( 1) 00:06:58.269 17.822 - 17.920: 99.2960% ( 1) 00:06:58.269 17.920 - 18.018: 99.3018% ( 1) 00:06:58.269 18.215 - 18.314: 99.3077% ( 1) 00:06:58.269 18.708 - 18.806: 99.3135% ( 1) 00:06:58.269 18.806 - 18.905: 99.3251% ( 2) 00:06:58.269 19.003 - 19.102: 99.3309% ( 1) 00:06:58.269 19.495 - 19.594: 99.3542% ( 4) 00:06:58.269 19.594 - 19.692: 99.4531% ( 17) 00:06:58.269 19.692 - 19.791: 99.5753% ( 21) 00:06:58.269 19.791 - 19.889: 99.6916% ( 20) 00:06:58.269 19.889 - 19.988: 99.7906% ( 17) 00:06:58.270 19.988 - 20.086: 99.8255% ( 6) 00:06:58.270 20.086 - 20.185: 99.8371% ( 2) 00:06:58.270 20.185 - 20.283: 99.8429% ( 1) 00:06:58.270 20.283 - 20.382: 99.8720% ( 5) 00:06:58.270 20.382 - 20.480: 99.8778% ( 1) 00:06:58.270 20.775 - 20.874: 99.8895% ( 2) 00:06:58.270 21.268 - 21.366: 99.8953% ( 1) 00:06:58.270 22.351 - 22.449: 99.9011% ( 1) 00:06:58.270 23.729 - 23.828: 99.9069% ( 1) 00:06:58.270 23.828 - 23.926: 99.9127% ( 1) 00:06:58.270 24.222 - 24.320: 99.9185% ( 1) 00:06:58.270 25.206 - 25.403: 99.9244% ( 1) 00:06:58.270 25.994 - 26.191: 99.9302% ( 1) 00:06:58.270 27.175 - 27.372: 99.9360% ( 1) 00:06:58.270 27.766 - 27.963: 99.9418% ( 1) 00:06:58.270 32.689 - 32.886: 99.9476% ( 1) 00:06:58.270 34.265 - 34.462: 99.9535% ( 1) 00:06:58.270 36.234 - 36.431: 99.9593% ( 1) 00:06:58.270 43.126 - 43.323: 99.9651% ( 1) 00:06:58.270 53.563 - 53.957: 99.9709% ( 1) 00:06:58.270 62.228 - 62.622: 99.9767% ( 1) 00:06:58.270 67.742 - 68.135: 99.9825% ( 1) 00:06:58.270 71.286 - 71.680: 99.9884% ( 1) 00:06:58.270 79.951 - 80.345: 99.9942% ( 1) 00:06:58.270 125.243 - 126.031: 100.0000% ( 1) 00:06:58.270 00:06:58.270 00:06:58.270 real 0m1.207s 
00:06:58.270 user 0m1.061s 00:06:58.270 sys 0m0.098s 00:06:58.270 17:10:40 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.270 17:10:40 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:06:58.270 ************************************ 00:06:58.270 END TEST nvme_overhead 00:06:58.270 ************************************ 00:06:58.270 17:10:40 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:06:58.270 17:10:40 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:58.270 17:10:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.270 17:10:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.270 ************************************ 00:06:58.270 START TEST nvme_arbitration 00:06:58.270 ************************************ 00:06:58.270 17:10:40 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:01.563 Initializing NVMe Controllers 00:07:01.563 Attached to 0000:00:10.0 00:07:01.563 Attached to 0000:00:11.0 00:07:01.563 Attached to 0000:00:13.0 00:07:01.563 Attached to 0000:00:12.0 00:07:01.563 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:01.563 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:01.563 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:01.563 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:01.563 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:01.563 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:01.563 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:01.563 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:01.563 Initialization complete. Launching workers. 
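The expanded configuration line above is echoed by the arbitration example itself, and the per-core urgent-priority results follow below. A sketch of re-running the same workload on two cores instead of four; every flag except the core mask (-c) is copied verbatim from the log, and whether the example rebalances its queues for a smaller mask is an assumption:

# Hypothetical arbitration re-run restricted to cores 0-1 (mask 0x3).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/build/examples/arbitration" \
    -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0x3 -m 0 -a 0 -b 0 -n 100000 -i 0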
00:07:01.563 Starting thread on core 1 with urgent priority queue 00:07:01.563 Starting thread on core 2 with urgent priority queue 00:07:01.563 Starting thread on core 3 with urgent priority queue 00:07:01.563 Starting thread on core 0 with urgent priority queue 00:07:01.563 QEMU NVMe Ctrl (12340 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:07:01.563 QEMU NVMe Ctrl (12342 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:07:01.563 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:01.563 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:01.563 QEMU NVMe Ctrl (12343 ) core 2: 1024.00 IO/s 97.66 secs/100000 ios 00:07:01.563 QEMU NVMe Ctrl (12342 ) core 3: 1088.00 IO/s 91.91 secs/100000 ios 00:07:01.563 ======================================================== 00:07:01.563 00:07:01.563 00:07:01.563 real 0m3.290s 00:07:01.563 user 0m9.186s 00:07:01.563 sys 0m0.111s 00:07:01.563 17:10:44 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.563 17:10:44 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:01.563 ************************************ 00:07:01.563 END TEST nvme_arbitration 00:07:01.563 ************************************ 00:07:01.563 17:10:44 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:01.563 17:10:44 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:01.564 17:10:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.564 17:10:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.564 ************************************ 00:07:01.564 START TEST nvme_single_aen 00:07:01.564 ************************************ 00:07:01.564 17:10:44 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:01.564 Asynchronous Event Request test 00:07:01.564 Attached to 0000:00:10.0 00:07:01.564 Attached to 0000:00:11.0 00:07:01.564 Attached to 0000:00:13.0 00:07:01.564 Attached to 0000:00:12.0 00:07:01.564 Reset controller to setup AER completions for this process 00:07:01.564 Registering asynchronous event callbacks... 
00:07:01.564 Getting orig temperature thresholds of all controllers 00:07:01.564 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:01.564 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:01.564 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:01.564 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:01.564 Setting all controllers temperature threshold low to trigger AER 00:07:01.564 Waiting for all controllers temperature threshold to be set lower 00:07:01.564 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:01.564 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:01.564 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:01.564 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:01.564 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:01.564 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:01.564 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:01.564 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:01.564 Waiting for all controllers to trigger AER and reset threshold 00:07:01.564 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.564 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.564 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.564 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.564 Cleaning up... 00:07:01.564 00:07:01.564 real 0m0.213s 00:07:01.564 user 0m0.078s 00:07:01.564 sys 0m0.089s 00:07:01.564 17:10:44 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.564 17:10:44 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:01.564 ************************************ 00:07:01.564 END TEST nvme_single_aen 00:07:01.564 ************************************ 00:07:01.564 17:10:44 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:01.564 17:10:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.564 17:10:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.564 17:10:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.564 ************************************ 00:07:01.564 START TEST nvme_doorbell_aers 00:07:01.564 ************************************ 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:01.564 17:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:01.821 [2024-10-30 17:10:44.713955] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:11.781 Executing: test_write_invalid_db 00:07:11.781 Waiting for AER completion... 00:07:11.781 Failure: test_write_invalid_db 00:07:11.781 00:07:11.781 Executing: test_invalid_db_write_overflow_sq 00:07:11.781 Waiting for AER completion... 00:07:11.781 Failure: test_invalid_db_write_overflow_sq 00:07:11.781 00:07:11.781 Executing: test_invalid_db_write_overflow_cq 00:07:11.781 Waiting for AER completion... 00:07:11.781 Failure: test_invalid_db_write_overflow_cq 00:07:11.781 00:07:11.781 17:10:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:11.781 17:10:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:12.039 [2024-10-30 17:10:54.762635] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:22.042 Executing: test_write_invalid_db 00:07:22.042 Waiting for AER completion... 00:07:22.042 Failure: test_write_invalid_db 00:07:22.042 00:07:22.042 Executing: test_invalid_db_write_overflow_sq 00:07:22.042 Waiting for AER completion... 00:07:22.042 Failure: test_invalid_db_write_overflow_sq 00:07:22.042 00:07:22.042 Executing: test_invalid_db_write_overflow_cq 00:07:22.042 Waiting for AER completion... 00:07:22.042 Failure: test_invalid_db_write_overflow_cq 00:07:22.042 00:07:22.042 17:11:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:22.042 17:11:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:22.042 [2024-10-30 17:11:04.786098] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:32.008 Executing: test_write_invalid_db 00:07:32.008 Waiting for AER completion... 00:07:32.008 Failure: test_write_invalid_db 00:07:32.008 00:07:32.008 Executing: test_invalid_db_write_overflow_sq 00:07:32.008 Waiting for AER completion... 00:07:32.008 Failure: test_invalid_db_write_overflow_sq 00:07:32.008 00:07:32.008 Executing: test_invalid_db_write_overflow_cq 00:07:32.008 Waiting for AER completion... 
00:07:32.008 Failure: test_invalid_db_write_overflow_cq 00:07:32.008 00:07:32.008 17:11:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:32.008 17:11:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:32.009 [2024-10-30 17:11:14.819049] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 Executing: test_write_invalid_db 00:07:41.989 Waiting for AER completion... 00:07:41.989 Failure: test_write_invalid_db 00:07:41.989 00:07:41.989 Executing: test_invalid_db_write_overflow_sq 00:07:41.989 Waiting for AER completion... 00:07:41.989 Failure: test_invalid_db_write_overflow_sq 00:07:41.989 00:07:41.989 Executing: test_invalid_db_write_overflow_cq 00:07:41.989 Waiting for AER completion... 00:07:41.989 Failure: test_invalid_db_write_overflow_cq 00:07:41.989 00:07:41.989 00:07:41.989 real 0m40.180s 00:07:41.989 user 0m34.154s 00:07:41.989 sys 0m5.648s 00:07:41.989 17:11:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.989 17:11:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:07:41.989 ************************************ 00:07:41.989 END TEST nvme_doorbell_aers 00:07:41.989 ************************************ 00:07:41.989 17:11:24 nvme -- nvme/nvme.sh@97 -- # uname 00:07:41.989 17:11:24 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:07:41.989 17:11:24 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:41.989 17:11:24 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:41.989 17:11:24 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.989 17:11:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.989 ************************************ 00:07:41.989 START TEST nvme_multi_aen 00:07:41.989 ************************************ 00:07:41.989 17:11:24 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:41.989 [2024-10-30 17:11:24.869150] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.869546] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.869594] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.870604] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.870682] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.870719] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.871451] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. 
Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.871514] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.871545] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.872322] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.872393] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 [2024-10-30 17:11:24.872424] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63061) is not found. Dropping the request. 00:07:41.989 Child process pid: 63582 00:07:42.247 [Child] Asynchronous Event Request test 00:07:42.247 [Child] Attached to 0000:00:10.0 00:07:42.247 [Child] Attached to 0000:00:11.0 00:07:42.247 [Child] Attached to 0000:00:13.0 00:07:42.247 [Child] Attached to 0000:00:12.0 00:07:42.247 [Child] Registering asynchronous event callbacks... 00:07:42.247 [Child] Getting orig temperature thresholds of all controllers 00:07:42.247 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.247 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.247 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.247 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.247 [Child] Waiting for all controllers to trigger AER and reset threshold 00:07:42.247 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.247 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.247 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.247 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.247 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.247 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.247 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.247 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.247 [Child] Cleaning up... 00:07:42.247 Asynchronous Event Request test 00:07:42.247 Attached to 0000:00:10.0 00:07:42.247 Attached to 0000:00:11.0 00:07:42.247 Attached to 0000:00:13.0 00:07:42.247 Attached to 0000:00:12.0 00:07:42.247 Reset controller to setup AER completions for this process 00:07:42.247 Registering asynchronous event callbacks... 
00:07:42.247 Getting orig temperature thresholds of all controllers 00:07:42.247 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.247 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.247 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.247 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:42.247 Setting all controllers temperature threshold low to trigger AER 00:07:42.248 Waiting for all controllers temperature threshold to be set lower 00:07:42.248 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.248 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:42.248 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.248 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:42.248 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.248 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:42.248 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:42.248 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:42.248 Waiting for all controllers to trigger AER and reset threshold 00:07:42.248 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.248 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.248 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.248 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.248 Cleaning up... 00:07:42.248 00:07:42.248 real 0m0.431s 00:07:42.248 user 0m0.142s 00:07:42.248 sys 0m0.176s 00:07:42.248 17:11:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.248 17:11:25 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:07:42.248 ************************************ 00:07:42.248 END TEST nvme_multi_aen 00:07:42.248 ************************************ 00:07:42.248 17:11:25 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:42.248 17:11:25 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:42.248 17:11:25 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.248 17:11:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.248 ************************************ 00:07:42.248 START TEST nvme_startup 00:07:42.248 ************************************ 00:07:42.248 17:11:25 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:42.506 Initializing NVMe Controllers 00:07:42.506 Attached to 0000:00:10.0 00:07:42.506 Attached to 0000:00:11.0 00:07:42.506 Attached to 0000:00:13.0 00:07:42.506 Attached to 0000:00:12.0 00:07:42.506 Initialization complete. 00:07:42.506 Time used:139693.812 (us). 
00:07:42.506 00:07:42.506 real 0m0.202s 00:07:42.506 user 0m0.074s 00:07:42.506 sys 0m0.083s 00:07:42.506 17:11:25 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.506 17:11:25 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:07:42.506 ************************************ 00:07:42.506 END TEST nvme_startup 00:07:42.506 ************************************ 00:07:42.506 17:11:25 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:07:42.506 17:11:25 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:42.506 17:11:25 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.506 17:11:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.506 ************************************ 00:07:42.506 START TEST nvme_multi_secondary 00:07:42.506 ************************************ 00:07:42.506 17:11:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:07:42.506 17:11:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63638 00:07:42.506 17:11:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:07:42.506 17:11:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63639 00:07:42.506 17:11:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:07:42.506 17:11:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:45.787 Initializing NVMe Controllers 00:07:45.787 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:45.787 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:45.787 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:45.787 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:45.787 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:45.787 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:45.787 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:45.787 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:45.787 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:45.787 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:45.787 Initialization complete. Launching workers. 
00:07:45.787 ======================================================== 00:07:45.787 Latency(us) 00:07:45.787 Device Information : IOPS MiB/s Average min max 00:07:45.787 PCIE (0000:00:10.0) NSID 1 from core 2: 3162.41 12.35 5058.16 903.00 13258.77 00:07:45.787 PCIE (0000:00:11.0) NSID 1 from core 2: 3162.41 12.35 5059.01 890.09 13033.83 00:07:45.787 PCIE (0000:00:13.0) NSID 1 from core 2: 3162.41 12.35 5059.34 889.37 13319.98 00:07:45.787 PCIE (0000:00:12.0) NSID 1 from core 2: 3162.41 12.35 5059.48 921.12 13423.95 00:07:45.787 PCIE (0000:00:12.0) NSID 2 from core 2: 3162.41 12.35 5059.39 909.08 13509.82 00:07:45.787 PCIE (0000:00:12.0) NSID 3 from core 2: 3162.41 12.35 5059.77 901.11 13630.31 00:07:45.787 ======================================================== 00:07:45.787 Total : 18974.48 74.12 5059.19 889.37 13630.31 00:07:45.787 00:07:45.787 Initializing NVMe Controllers 00:07:45.787 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:45.787 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:45.787 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:45.787 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:45.787 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:45.787 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:45.787 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:45.788 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:45.788 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:45.788 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:45.788 Initialization complete. Launching workers. 00:07:45.788 ======================================================== 00:07:45.788 Latency(us) 00:07:45.788 Device Information : IOPS MiB/s Average min max 00:07:45.788 PCIE (0000:00:10.0) NSID 1 from core 1: 8002.64 31.26 1998.01 718.85 5822.12 00:07:45.788 PCIE (0000:00:11.0) NSID 1 from core 1: 8002.64 31.26 1998.94 736.46 6864.94 00:07:45.788 PCIE (0000:00:13.0) NSID 1 from core 1: 8002.64 31.26 1998.89 745.65 6356.75 00:07:45.788 PCIE (0000:00:12.0) NSID 1 from core 1: 8002.64 31.26 1998.93 736.60 5968.86 00:07:45.788 PCIE (0000:00:12.0) NSID 2 from core 1: 8002.64 31.26 1998.91 737.30 5673.57 00:07:45.788 PCIE (0000:00:12.0) NSID 3 from core 1: 8002.64 31.26 1998.88 736.50 5793.01 00:07:45.788 ======================================================== 00:07:45.788 Total : 48015.85 187.56 1998.76 718.85 6864.94 00:07:45.788 00:07:45.788 17:11:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63638 00:07:48.318 Initializing NVMe Controllers 00:07:48.318 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.318 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.318 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.318 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.318 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:48.318 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:48.318 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:48.318 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:48.318 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:48.318 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:48.318 Initialization complete. Launching workers. 
00:07:48.318 ======================================================== 00:07:48.318 Latency(us) 00:07:48.318 Device Information : IOPS MiB/s Average min max 00:07:48.318 PCIE (0000:00:10.0) NSID 1 from core 0: 11139.18 43.51 1435.15 685.47 5560.56 00:07:48.318 PCIE (0000:00:11.0) NSID 1 from core 0: 11139.18 43.51 1435.97 707.69 6407.23 00:07:48.318 PCIE (0000:00:13.0) NSID 1 from core 0: 11139.18 43.51 1435.95 635.18 7805.15 00:07:48.318 PCIE (0000:00:12.0) NSID 1 from core 0: 11139.18 43.51 1435.92 620.14 7935.39 00:07:48.318 PCIE (0000:00:12.0) NSID 2 from core 0: 11139.18 43.51 1435.90 592.36 7346.73 00:07:48.318 PCIE (0000:00:12.0) NSID 3 from core 0: 11139.18 43.51 1435.88 574.26 6407.60 00:07:48.318 ======================================================== 00:07:48.318 Total : 66835.11 261.07 1435.80 574.26 7935.39 00:07:48.318 00:07:48.318 17:11:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63639 00:07:48.318 17:11:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63708 00:07:48.318 17:11:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:07:48.318 17:11:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63709 00:07:48.318 17:11:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:07:48.318 17:11:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:51.604 Initializing NVMe Controllers 00:07:51.604 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.604 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.604 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.604 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.604 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:51.604 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:51.604 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:51.604 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:51.604 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:51.604 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:51.604 Initialization complete. Launching workers. 
00:07:51.604 ======================================================== 00:07:51.604 Latency(us) 00:07:51.604 Device Information : IOPS MiB/s Average min max 00:07:51.604 PCIE (0000:00:10.0) NSID 1 from core 0: 8107.65 31.67 1972.09 703.30 6418.75 00:07:51.604 PCIE (0000:00:11.0) NSID 1 from core 0: 8107.65 31.67 1973.21 726.74 5825.35 00:07:51.604 PCIE (0000:00:13.0) NSID 1 from core 0: 8107.65 31.67 1973.23 725.51 6216.30 00:07:51.604 PCIE (0000:00:12.0) NSID 1 from core 0: 8107.65 31.67 1973.39 720.34 6112.36 00:07:51.604 PCIE (0000:00:12.0) NSID 2 from core 0: 8107.65 31.67 1973.53 729.08 6829.68 00:07:51.604 PCIE (0000:00:12.0) NSID 3 from core 0: 8107.65 31.67 1973.69 727.58 7427.19 00:07:51.604 ======================================================== 00:07:51.604 Total : 48645.91 190.02 1973.19 703.30 7427.19 00:07:51.604 00:07:51.604 Initializing NVMe Controllers 00:07:51.604 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.604 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.604 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.604 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.604 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:51.604 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:51.604 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:51.604 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:51.604 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:51.604 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:51.604 Initialization complete. Launching workers. 00:07:51.604 ======================================================== 00:07:51.604 Latency(us) 00:07:51.604 Device Information : IOPS MiB/s Average min max 00:07:51.604 PCIE (0000:00:10.0) NSID 1 from core 1: 8143.20 31.81 1963.51 702.54 5587.26 00:07:51.604 PCIE (0000:00:11.0) NSID 1 from core 1: 8143.20 31.81 1964.35 711.84 5419.76 00:07:51.604 PCIE (0000:00:13.0) NSID 1 from core 1: 8143.20 31.81 1964.30 719.27 5419.75 00:07:51.604 PCIE (0000:00:12.0) NSID 1 from core 1: 8143.20 31.81 1964.25 629.37 5526.90 00:07:51.604 PCIE (0000:00:12.0) NSID 2 from core 1: 8143.20 31.81 1964.19 606.96 5701.89 00:07:51.604 PCIE (0000:00:12.0) NSID 3 from core 1: 8143.20 31.81 1964.14 581.52 5694.31 00:07:51.604 ======================================================== 00:07:51.604 Total : 48859.21 190.86 1964.12 581.52 5701.89 00:07:51.604 00:07:53.591 Initializing NVMe Controllers 00:07:53.591 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:53.591 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:53.591 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:53.591 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:53.591 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:53.591 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:53.591 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:53.591 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:53.591 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:53.591 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:53.591 Initialization complete. Launching workers. 
00:07:53.591 ======================================================== 00:07:53.591 Latency(us) 00:07:53.591 Device Information : IOPS MiB/s Average min max 00:07:53.592 PCIE (0000:00:10.0) NSID 1 from core 2: 4710.60 18.40 3394.95 741.19 13720.07 00:07:53.592 PCIE (0000:00:11.0) NSID 1 from core 2: 4710.60 18.40 3396.24 713.02 13231.15 00:07:53.592 PCIE (0000:00:13.0) NSID 1 from core 2: 4710.60 18.40 3396.02 746.08 12706.83 00:07:53.592 PCIE (0000:00:12.0) NSID 1 from core 2: 4710.60 18.40 3395.79 754.13 12608.82 00:07:53.592 PCIE (0000:00:12.0) NSID 2 from core 2: 4710.60 18.40 3395.74 738.30 13232.56 00:07:53.592 PCIE (0000:00:12.0) NSID 3 from core 2: 4710.60 18.40 3395.86 672.59 12745.24 00:07:53.592 ======================================================== 00:07:53.592 Total : 28263.61 110.40 3395.77 672.59 13720.07 00:07:53.592 00:07:53.592 17:11:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63708 00:07:53.592 17:11:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63709 00:07:53.592 00:07:53.592 real 0m10.728s 00:07:53.592 user 0m18.383s 00:07:53.592 sys 0m0.605s 00:07:53.592 17:11:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.592 ************************************ 00:07:53.592 END TEST nvme_multi_secondary 00:07:53.592 17:11:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:07:53.592 ************************************ 00:07:53.592 17:11:36 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:07:53.592 17:11:36 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:07:53.592 17:11:36 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62671 ]] 00:07:53.592 17:11:36 nvme -- common/autotest_common.sh@1092 -- # kill 62671 00:07:53.592 17:11:36 nvme -- common/autotest_common.sh@1093 -- # wait 62671 00:07:53.592 [2024-10-30 17:11:36.153672] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.153748] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.153777] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.153796] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.156275] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.156330] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.156348] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.156366] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.158532] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 
00:07:53.592 [2024-10-30 17:11:36.158567] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.158578] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.158589] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.159961] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.159996] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.160006] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 [2024-10-30 17:11:36.160018] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63581) is not found. Dropping the request. 00:07:53.592 17:11:36 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:07:53.592 17:11:36 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:07:53.592 17:11:36 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:07:53.592 17:11:36 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:53.592 17:11:36 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.592 17:11:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.592 ************************************ 00:07:53.592 START TEST bdev_nvme_reset_stuck_adm_cmd 00:07:53.592 ************************************ 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:07:53.592 * Looking for test storage... 
00:07:53.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:53.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.592 --rc genhtml_branch_coverage=1 00:07:53.592 --rc genhtml_function_coverage=1 00:07:53.592 --rc genhtml_legend=1 00:07:53.592 --rc geninfo_all_blocks=1 00:07:53.592 --rc geninfo_unexecuted_blocks=1 00:07:53.592 00:07:53.592 ' 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:53.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.592 --rc genhtml_branch_coverage=1 00:07:53.592 --rc genhtml_function_coverage=1 00:07:53.592 --rc genhtml_legend=1 00:07:53.592 --rc geninfo_all_blocks=1 00:07:53.592 --rc geninfo_unexecuted_blocks=1 00:07:53.592 00:07:53.592 ' 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:53.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.592 --rc genhtml_branch_coverage=1 00:07:53.592 --rc genhtml_function_coverage=1 00:07:53.592 --rc genhtml_legend=1 00:07:53.592 --rc geninfo_all_blocks=1 00:07:53.592 --rc geninfo_unexecuted_blocks=1 00:07:53.592 00:07:53.592 ' 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:53.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.592 --rc genhtml_branch_coverage=1 00:07:53.592 --rc genhtml_function_coverage=1 00:07:53.592 --rc genhtml_legend=1 00:07:53.592 --rc geninfo_all_blocks=1 00:07:53.592 --rc geninfo_unexecuted_blocks=1 00:07:53.592 00:07:53.592 ' 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:07:53.592 
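The lt 1.15 2 gate above (cmp_versions in scripts/common.sh) is a field-by-field dotted-version compare used to decide which lcov coverage flags to keep. The same check can be approximated with sort -V; this is a sketch only, not what common.sh actually does internally:

    version_lt() {
        # exit 0 if $1 sorts strictly before $2, e.g. version_lt 1.15 2
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x, keep the branch/function coverage flags"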
17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:53.592 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63876 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63876 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 63876 ']' 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
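get_first_nvme_bdf above simply takes the first address that scripts/gen_nvme.sh reports: the script emits a bdev_nvme attach config as JSON, so the BDF list is a jq query over .config[].params.traddr. A standalone sketch of the same idea, assuming the repo path shown in the log:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints one config entry per local NVMe controller; traddr is its PCI address.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "first bdf: ${bdfs[0]}"    # 0000:00:10.0 in this run, out of the four listed above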
00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:53.593 17:11:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:07:53.593 [2024-10-30 17:11:36.571470] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:07:53.593 [2024-10-30 17:11:36.571576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63876 ] 00:07:53.854 [2024-10-30 17:11:36.729296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.854 [2024-10-30 17:11:36.814807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.854 [2024-10-30 17:11:36.815106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.854 [2024-10-30 17:11:36.815116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.854 [2024-10-30 17:11:36.815119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:54.798 nvme0n1 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_nKRv7.txt 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:54.798 true 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730308297 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63899 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:07:54.798 17:11:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:56.706 [2024-10-30 17:11:39.493443] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:56.706 [2024-10-30 17:11:39.493961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:07:56.706 [2024-10-30 17:11:39.494001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:07:56.706 [2024-10-30 17:11:39.494012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.706 [2024-10-30 17:11:39.495345] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.706 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63899 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63899 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63899 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_nKRv7.txt 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:07:56.706 17:11:39 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_nKRv7.txt 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63876 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 63876 ']' 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 63876 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63876 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63876' 00:07:56.706 killing process with pid 63876 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 63876 00:07:56.706 17:11:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 63876 00:07:58.089 17:11:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:07:58.089 17:11:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:07:58.089 
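The two base64_decode_bits calls above recover the status that the injected admin command completed with: the .cpl field saved in /tmp/err_inj_nKRv7.txt is the raw 16-byte completion, and bytes 14-15 of it hold the status word (bit 0 phase tag, bits 1-8 Status Code, bits 9-11 Status Code Type). A self-contained sketch of that decode; the variable and helper names here are illustrative, not the ones from functions.sh:

    cpl_b64="AAAAAAAAAAAAAAAAAAACAA=="     # .cpl value captured from the temp file above
    # Little-endian 16-bit status word at byte offset 14 of the completion entry.
    status_word=$(printf '%s' "$cpl_b64" | base64 -d | od -An -tu2 -j 14 -N 2 | tr -d ' ')
    sc=$(( (status_word >> 1) & 0xff ))    # Status Code      -> 0x1, matches the injected --sc 1
    sct=$(( (status_word >> 9) & 0x7 ))    # Status Code Type -> 0x0, matches the injected --sct 0
    printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"

The checks just above then pass because both values match what bdev_nvme_add_error_injection was told to inject and the reset finished within test_timeout.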
************************************ 00:07:58.089 END TEST bdev_nvme_reset_stuck_adm_cmd 00:07:58.089 ************************************ 00:07:58.089 00:07:58.089 real 0m4.484s 00:07:58.089 user 0m15.967s 00:07:58.089 sys 0m0.505s 00:07:58.089 17:11:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.089 17:11:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:07:58.089 17:11:40 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:07:58.089 17:11:40 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:07:58.089 17:11:40 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.089 17:11:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.089 17:11:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.089 ************************************ 00:07:58.089 START TEST nvme_fio 00:07:58.089 ************************************ 00:07:58.089 17:11:40 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:07:58.089 17:11:40 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:07:58.089 17:11:40 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:07:58.089 17:11:40 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:07:58.089 17:11:40 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:58.089 17:11:40 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:07:58.089 17:11:40 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:58.089 17:11:40 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:58.089 17:11:40 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:58.089 17:11:40 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:58.089 17:11:40 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:58.089 17:11:40 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:07:58.089 17:11:40 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:07:58.089 17:11:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:07:58.089 17:11:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:58.089 17:11:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:07:58.348 17:11:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:58.348 17:11:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:07:58.348 17:11:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:07:58.348 17:11:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:07:58.348 17:11:41 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:07:58.608 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:07:58.608 fio-3.35 00:07:58.608 Starting 1 thread 00:08:06.748 00:08:06.748 test: (groupid=0, jobs=1): err= 0: pid=64035: Wed Oct 30 17:11:48 2024 00:08:06.748 read: IOPS=20.7k, BW=81.0MiB/s (84.9MB/s)(162MiB/2001msec) 00:08:06.748 slat (nsec): min=3431, max=70442, avg=5291.11, stdev=2248.93 00:08:06.748 clat (usec): min=594, max=10062, avg=3064.95, stdev=929.14 00:08:06.748 lat (usec): min=606, max=10120, avg=3070.24, stdev=930.11 00:08:06.748 clat percentiles (usec): 00:08:06.748 | 1.00th=[ 1975], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:08:06.748 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2868], 00:08:06.748 | 70.00th=[ 3032], 80.00th=[ 3425], 90.00th=[ 4424], 95.00th=[ 5211], 00:08:06.748 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7898], 00:08:06.748 | 99.99th=[ 9896] 00:08:06.748 bw ( KiB/s): min=75672, max=90600, per=100.00%, avg=84752.00, stdev=7971.55, samples=3 00:08:06.748 iops : min=18918, max=22650, avg=21188.00, stdev=1992.89, samples=3 00:08:06.748 write: IOPS=20.7k, BW=80.7MiB/s (84.6MB/s)(161MiB/2001msec); 0 zone resets 00:08:06.749 slat (nsec): min=3532, max=88695, avg=5395.44, stdev=2248.44 00:08:06.749 clat (usec): min=634, max=9988, avg=3097.16, stdev=937.32 00:08:06.749 lat (usec): min=647, max=10006, avg=3102.55, stdev=938.28 00:08:06.749 clat percentiles (usec): 00:08:06.749 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2507], 00:08:06.749 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:08:06.749 | 70.00th=[ 3064], 80.00th=[ 3458], 90.00th=[ 4490], 95.00th=[ 5276], 00:08:06.749 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 7963], 00:08:06.749 | 99.99th=[ 9503] 00:08:06.749 bw ( KiB/s): min=76264, max=90088, per=100.00%, avg=84842.67, stdev=7490.60, samples=3 00:08:06.749 iops : min=19066, max=22522, avg=21210.67, stdev=1872.65, samples=3 
00:08:06.749 lat (usec) : 750=0.01%, 1000=0.03% 00:08:06.749 lat (msec) : 2=0.97%, 4=85.15%, 10=13.84%, 20=0.01% 00:08:06.749 cpu : usr=99.05%, sys=0.05%, ctx=4, majf=0, minf=607 00:08:06.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:06.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:06.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:06.749 issued rwts: total=41489,41334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:06.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:06.749 00:08:06.749 Run status group 0 (all jobs): 00:08:06.749 READ: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=162MiB (170MB), run=2001-2001msec 00:08:06.749 WRITE: bw=80.7MiB/s (84.6MB/s), 80.7MiB/s-80.7MiB/s (84.6MB/s-84.6MB/s), io=161MiB (169MB), run=2001-2001msec 00:08:06.749 ----------------------------------------------------- 00:08:06.749 Suppressions used: 00:08:06.749 count bytes template 00:08:06.749 1 32 /usr/src/fio/parse.c 00:08:06.749 1 8 libtcmalloc_minimal.so 00:08:06.749 ----------------------------------------------------- 00:08:06.749 00:08:06.749 17:11:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:06.749 17:11:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:06.749 17:11:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:06.749 17:11:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:06.749 17:11:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:06.749 17:11:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:06.749 17:11:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:06.749 17:11:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # 
[[ -n /usr/lib64/libasan.so.8 ]] 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:06.749 17:11:49 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:07.010 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:07.010 fio-3.35 00:08:07.010 Starting 1 thread 00:08:13.596 00:08:13.596 test: (groupid=0, jobs=1): err= 0: pid=64091: Wed Oct 30 17:11:56 2024 00:08:13.596 read: IOPS=24.3k, BW=94.9MiB/s (99.6MB/s)(190MiB/2001msec) 00:08:13.596 slat (nsec): min=4242, max=85688, avg=4991.45, stdev=2077.80 00:08:13.596 clat (usec): min=240, max=7979, avg=2632.25, stdev=724.94 00:08:13.596 lat (usec): min=245, max=7994, avg=2637.24, stdev=726.21 00:08:13.596 clat percentiles (usec): 00:08:13.596 | 1.00th=[ 1778], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2343], 00:08:13.596 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:13.596 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2933], 95.00th=[ 4424], 00:08:13.596 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 7439], 99.95th=[ 7832], 00:08:13.596 | 99.99th=[ 7963] 00:08:13.596 bw ( KiB/s): min=95760, max=98352, per=99.77%, avg=96997.33, stdev=1299.98, samples=3 00:08:13.596 iops : min=23940, max=24588, avg=24249.33, stdev=324.99, samples=3 00:08:13.596 write: IOPS=24.2k, BW=94.3MiB/s (98.9MB/s)(189MiB/2001msec); 0 zone resets 00:08:13.596 slat (usec): min=4, max=206, avg= 5.21, stdev= 2.40 00:08:13.596 clat (usec): min=212, max=8059, avg=2628.74, stdev=712.20 00:08:13.596 lat (usec): min=216, max=8071, avg=2633.95, stdev=713.43 00:08:13.596 clat percentiles (usec): 00:08:13.596 | 1.00th=[ 1778], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2343], 00:08:13.596 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:13.596 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2900], 95.00th=[ 4359], 00:08:13.596 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 7308], 99.95th=[ 7767], 00:08:13.596 | 99.99th=[ 7963] 00:08:13.596 bw ( KiB/s): min=96592, max=97960, per=100.00%, avg=97090.67, stdev=755.58, samples=3 00:08:13.596 iops : min=24148, max=24490, avg=24272.67, stdev=188.90, samples=3 00:08:13.596 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:08:13.596 lat (msec) : 2=2.47%, 4=91.35%, 10=6.13% 00:08:13.596 cpu : usr=99.05%, sys=0.15%, ctx=26, majf=0, minf=607 00:08:13.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:13.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:13.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:13.596 issued rwts: total=48635,48328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:13.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:13.596 00:08:13.596 Run status group 0 (all jobs): 00:08:13.596 READ: bw=94.9MiB/s (99.6MB/s), 94.9MiB/s-94.9MiB/s (99.6MB/s-99.6MB/s), io=190MiB (199MB), run=2001-2001msec 00:08:13.596 WRITE: bw=94.3MiB/s (98.9MB/s), 94.3MiB/s-94.3MiB/s (98.9MB/s-98.9MB/s), io=189MiB (198MB), run=2001-2001msec 00:08:13.596 ----------------------------------------------------- 00:08:13.596 Suppressions used: 00:08:13.596 count bytes template 00:08:13.596 1 32 
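fio_nvme above is a thin wrapper: it preloads the SPDK fio ioengine (with libasan first, since this is an ASAN build) and hands fio a filename that encodes the transport plus the PCI address with dots in place of colons; bs=4096 comes from the 'Extended Data LBA' probe just before it. Run by hand against the 0000:00:11.0 controller it amounts to roughly:

    SPDK=/home/vagrant/spdk_repo/spdk
    LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_nvme" \
        /usr/src/fio/fio "$SPDK/app/fio/nvme/example_config.fio" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096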
/usr/src/fio/parse.c 00:08:13.596 1 8 libtcmalloc_minimal.so 00:08:13.596 ----------------------------------------------------- 00:08:13.596 00:08:13.596 17:11:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:13.596 17:11:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:13.597 17:11:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:13.597 17:11:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:13.858 17:11:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:13.858 17:11:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:14.120 17:11:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:14.120 17:11:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:14.120 17:11:56 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:14.120 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:14.120 fio-3.35 00:08:14.120 Starting 1 thread 00:08:22.259 00:08:22.259 test: (groupid=0, jobs=1): err= 0: pid=64151: Wed Oct 30 17:12:04 2024 00:08:22.259 read: IOPS=24.2k, BW=94.7MiB/s (99.3MB/s)(189MiB/2001msec) 00:08:22.259 slat (nsec): min=3411, max=72306, avg=5014.04, stdev=2090.69 00:08:22.259 clat (usec): min=217, max=7748, avg=2637.77, stdev=757.65 00:08:22.259 lat (usec): min=221, max=7752, avg=2642.78, stdev=759.00 
00:08:22.259 clat percentiles (usec): 00:08:22.259 | 1.00th=[ 1647], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2343], 00:08:22.259 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:22.259 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3097], 95.00th=[ 4555], 00:08:22.259 | 99.00th=[ 5866], 99.50th=[ 6128], 99.90th=[ 7308], 99.95th=[ 7504], 00:08:22.259 | 99.99th=[ 7635] 00:08:22.259 bw ( KiB/s): min=96256, max=96456, per=99.42%, avg=96381.33, stdev=109.20, samples=3 00:08:22.259 iops : min=24064, max=24116, avg=24095.33, stdev=27.59, samples=3 00:08:22.259 write: IOPS=24.1k, BW=94.0MiB/s (98.6MB/s)(188MiB/2001msec); 0 zone resets 00:08:22.259 slat (nsec): min=3549, max=61487, avg=5236.38, stdev=2067.81 00:08:22.259 clat (usec): min=292, max=7659, avg=2639.80, stdev=764.87 00:08:22.259 lat (usec): min=297, max=7672, avg=2645.03, stdev=766.17 00:08:22.259 clat percentiles (usec): 00:08:22.259 | 1.00th=[ 1631], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2343], 00:08:22.259 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:22.259 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3097], 95.00th=[ 4555], 00:08:22.259 | 99.00th=[ 5932], 99.50th=[ 6259], 99.90th=[ 7242], 99.95th=[ 7373], 00:08:22.259 | 99.99th=[ 7570] 00:08:22.259 bw ( KiB/s): min=95904, max=97504, per=100.00%, avg=96522.67, stdev=859.44, samples=3 00:08:22.259 iops : min=23976, max=24376, avg=24130.67, stdev=214.86, samples=3 00:08:22.259 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.05% 00:08:22.259 lat (msec) : 2=3.43%, 4=90.21%, 10=6.27% 00:08:22.259 cpu : usr=99.40%, sys=0.00%, ctx=4, majf=0, minf=607 00:08:22.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:22.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:22.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:22.259 issued rwts: total=48494,48169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:22.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:22.259 00:08:22.259 Run status group 0 (all jobs): 00:08:22.259 READ: bw=94.7MiB/s (99.3MB/s), 94.7MiB/s-94.7MiB/s (99.3MB/s-99.3MB/s), io=189MiB (199MB), run=2001-2001msec 00:08:22.259 WRITE: bw=94.0MiB/s (98.6MB/s), 94.0MiB/s-94.0MiB/s (98.6MB/s-98.6MB/s), io=188MiB (197MB), run=2001-2001msec 00:08:22.259 ----------------------------------------------------- 00:08:22.259 Suppressions used: 00:08:22.259 count bytes template 00:08:22.259 1 32 /usr/src/fio/parse.c 00:08:22.259 1 8 libtcmalloc_minimal.so 00:08:22.259 ----------------------------------------------------- 00:08:22.259 00:08:22.259 17:12:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:22.259 17:12:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:22.259 17:12:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:22.259 17:12:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:22.259 17:12:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:22.259 17:12:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:22.259 17:12:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:22.259 17:12:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 
00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:22.259 17:12:04 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:22.259 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:22.259 fio-3.35 00:08:22.259 Starting 1 thread 00:08:32.262 00:08:32.262 test: (groupid=0, jobs=1): err= 0: pid=64212: Wed Oct 30 17:12:13 2024 00:08:32.262 read: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(171MiB/2001msec) 00:08:32.262 slat (nsec): min=3348, max=73333, avg=4991.28, stdev=2081.09 00:08:32.262 clat (usec): min=386, max=8662, avg=2915.68, stdev=916.02 00:08:32.262 lat (usec): min=390, max=8668, avg=2920.67, stdev=916.89 00:08:32.262 clat percentiles (usec): 00:08:32.262 | 1.00th=[ 1795], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:32.262 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2704], 00:08:32.262 | 70.00th=[ 2900], 80.00th=[ 3228], 90.00th=[ 4228], 95.00th=[ 5014], 00:08:32.262 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 7439], 99.95th=[ 7635], 00:08:32.262 | 99.99th=[ 8225] 00:08:32.262 bw ( KiB/s): min=84440, max=93752, per=100.00%, avg=88738.33, stdev=4697.03, samples=3 00:08:32.262 iops : min=21110, max=23436, avg=22184.33, stdev=1173.10, samples=3 00:08:32.262 write: IOPS=21.7k, BW=84.8MiB/s (88.9MB/s)(170MiB/2001msec); 0 zone resets 00:08:32.262 slat (nsec): min=3516, max=52241, avg=5135.71, stdev=2071.75 00:08:32.262 clat (usec): min=361, max=8641, avg=2938.66, stdev=913.31 00:08:32.262 lat (usec): min=366, max=8647, avg=2943.79, stdev=914.17 00:08:32.262 clat percentiles (usec): 00:08:32.262 | 1.00th=[ 1844], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:32.262 | 30.00th=[ 2442], 40.00th=[ 
2507], 50.00th=[ 2606], 60.00th=[ 2737], 00:08:32.262 | 70.00th=[ 2900], 80.00th=[ 3261], 90.00th=[ 4293], 95.00th=[ 5014], 00:08:32.262 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7635], 00:08:32.262 | 99.99th=[ 7898] 00:08:32.262 bw ( KiB/s): min=86120, max=93168, per=100.00%, avg=88920.33, stdev=3740.27, samples=3 00:08:32.262 iops : min=21530, max=23292, avg=22230.00, stdev=935.12, samples=3 00:08:32.262 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.03% 00:08:32.262 lat (msec) : 2=1.68%, 4=86.30%, 10=11.96% 00:08:32.262 cpu : usr=98.95%, sys=0.25%, ctx=14, majf=0, minf=606 00:08:32.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:32.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.262 issued rwts: total=43731,43420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.262 00:08:32.262 Run status group 0 (all jobs): 00:08:32.262 READ: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:08:32.262 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=170MiB (178MB), run=2001-2001msec 00:08:32.262 ----------------------------------------------------- 00:08:32.262 Suppressions used: 00:08:32.262 count bytes template 00:08:32.262 1 32 /usr/src/fio/parse.c 00:08:32.262 1 8 libtcmalloc_minimal.so 00:08:32.262 ----------------------------------------------------- 00:08:32.262 00:08:32.262 ************************************ 00:08:32.262 END TEST nvme_fio 00:08:32.262 ************************************ 00:08:32.262 17:12:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:32.262 17:12:14 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:32.262 00:08:32.262 real 0m33.304s 00:08:32.262 user 0m19.262s 00:08:32.262 sys 0m26.172s 00:08:32.262 17:12:14 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.262 17:12:14 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:32.262 00:08:32.262 real 1m41.952s 00:08:32.262 user 3m38.742s 00:08:32.262 sys 0m36.447s 00:08:32.262 17:12:14 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.262 17:12:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.262 ************************************ 00:08:32.262 END TEST nvme 00:08:32.262 ************************************ 00:08:32.262 17:12:14 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:32.262 17:12:14 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:32.262 17:12:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:32.262 17:12:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.262 17:12:14 -- common/autotest_common.sh@10 -- # set +x 00:08:32.262 ************************************ 00:08:32.262 START TEST nvme_scc 00:08:32.262 ************************************ 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:32.262 * Looking for test storage... 
00:08:32.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.262 17:12:14 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:32.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.262 --rc genhtml_branch_coverage=1 00:08:32.262 --rc genhtml_function_coverage=1 00:08:32.262 --rc genhtml_legend=1 00:08:32.262 --rc geninfo_all_blocks=1 00:08:32.262 --rc geninfo_unexecuted_blocks=1 00:08:32.262 00:08:32.262 ' 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:32.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.262 --rc genhtml_branch_coverage=1 00:08:32.262 --rc genhtml_function_coverage=1 00:08:32.262 --rc genhtml_legend=1 00:08:32.262 --rc geninfo_all_blocks=1 00:08:32.262 --rc geninfo_unexecuted_blocks=1 00:08:32.262 00:08:32.262 ' 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:32.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.262 --rc genhtml_branch_coverage=1 00:08:32.262 --rc genhtml_function_coverage=1 00:08:32.262 --rc genhtml_legend=1 00:08:32.262 --rc geninfo_all_blocks=1 00:08:32.262 --rc geninfo_unexecuted_blocks=1 00:08:32.262 00:08:32.262 ' 00:08:32.262 17:12:14 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:32.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.262 --rc genhtml_branch_coverage=1 00:08:32.262 --rc genhtml_function_coverage=1 00:08:32.262 --rc genhtml_legend=1 00:08:32.262 --rc geninfo_all_blocks=1 00:08:32.262 --rc geninfo_unexecuted_blocks=1 00:08:32.262 00:08:32.262 ' 00:08:32.262 17:12:14 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:32.262 17:12:14 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:32.262 17:12:14 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:32.262 17:12:14 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:32.262 17:12:14 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.263 17:12:14 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.263 17:12:14 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.263 17:12:14 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.263 17:12:14 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.263 17:12:14 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.263 17:12:14 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.263 17:12:14 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.263 17:12:14 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:32.263 17:12:14 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
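Below is a minimal sketch (assumed names, not the actual scripts/common.sh) of the component-wise version comparison the lcov check above traces: each version string is split on '.', '-' and ':' and the fields are compared numerically from left to right.

version_lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields count as 0, so "2" is compared as "2.0" against "1.15".
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1
}

# version_lt 1.15 2 && echo 'lcov is older than 2.x'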
00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:32.263 17:12:14 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:32.263 17:12:14 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.263 17:12:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:32.263 17:12:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:32.263 17:12:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:32.263 17:12:14 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:32.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:32.263 Waiting for block devices as requested 00:08:32.263 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.263 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.263 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.263 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:37.607 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:37.607 17:12:20 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:37.607 17:12:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:37.607 17:12:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:37.607 17:12:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:37.607 17:12:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.607 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.608 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:37.609 17:12:20 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.609 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:37.610 17:12:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:37.610 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.611 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:37.612 17:12:20 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:37.612 17:12:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:37.612 17:12:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:37.612 17:12:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:37.612 17:12:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:37.612 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:37.613 17:12:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:37.613 
17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
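(Annotation: the repeated IFS=: / read -r reg val / eval steps traced above are nvme_get walking the text output of "nvme id-ctrl" one "field : value" line at a time and storing non-empty values in a per-controller associative array such as nvme1. The snippet below is a simplified, illustrative sketch of that pattern only, not the actual nvme/functions.sh source.)
    # Sketch (assumption: illustrative only, not the real nvme_get implementation).
    # Split each "field : value" line from nvme-cli on ':', skip empty values,
    # and keep the rest in an associative array keyed by field name.
    declare -A nvme1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # drop padding around the field name
        val=${val#"${val%%[![:space:]]*}"}  # trim leading whitespace from the value
        [[ -n $val ]] || continue           # entries with no value are skipped
        nvme1[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
    echo "${nvme1[sn]} ${nvme1[mdts]} ${nvme1[oncs]}"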
00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.613 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:37.614 17:12:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:37.614 17:12:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.614 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:37.615 17:12:20 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
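(Annotation: once a controller's fields are captured this way, later nvme_scc steps can consult them directly. As a hedged, hypothetical example not taken from this log: the suite exercises the Simple Copy command, and per the NVMe base specification ONCS bit 8 advertises Copy support, so the oncs value parsed above, 0x15d, could be checked like this.)
    # Hypothetical follow-up check (assumption: not part of the traced script).
    if (( ${nvme1[oncs]:-0} & 0x100 )); then
        echo "nvme1 advertises the Copy command (oncs=${nvme1[oncs]})"
    fi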
00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.615 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:37.616 
17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.616 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:37.617 17:12:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:37.617 17:12:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:37.617 17:12:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:37.617 17:12:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:37.617 17:12:20 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:37.617 17:12:20 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.617 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:37.618 17:12:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:37.618 17:12:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:37.618 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:37.619 
17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.619 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
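What the trace above appears to be exercising, reconstructed from the @16-@23 and @47-@63 function/line references alone, is roughly the following: nvme_get runs nvme-cli's id-ctrl or id-ns against a device and folds every "reg : val" line of its output into a global associative array named after the device, while the outer loop repeats that for every controller and namespace under /sys/class/nvme. The sketch below is a hedged approximation of that flow, not the verbatim SPDK helper; the /usr/local/src/nvme-cli/nvme path, the sysfs layout, and the array bookkeeping names are taken from the log, but the whitespace handling is an assumption.

    # Hedged sketch of the scan traced above (approximate, not the upstream source).
    declare -A ctrls bdfs                           # bookkeeping as in functions.sh@60-@63

    nvme_get() {                                    # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                         # global associative array named after the device
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                # "ps    0" -> "ps0", "vid   " -> "vid" (assumed)
            val=${val#"${val%%[![:space:]]*}"}      # drop padding after the colon (assumed)
            [[ -n $val ]] || continue               # skip headers and empty fields
            eval "${ref}[${reg}]=\"${val}\""        # e.g. nvme2[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                        # nvme0, nvme1, nvme2, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        for ns in "$ctrl/${ctrl##*/}n"*; do         # nvme2n1, nvme2n2, ...
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                # plus nvmes[] and bdfs[] in the real script
    done

After the scan, a lookup such as "${nvme2n1[lbaf4]}" would return the captured string "ms:0 lbads:12 rp:0 (in use)", which is how later stages can tell which LBA format a namespace is formatted with.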
00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:37.620 17:12:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:37.620 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:37.620 17:12:20 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:37.621 17:12:20 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:37.621 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
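What the repeating trace lines above amount to: `IFS=:` plus `read -r reg val` splits each "field : value" line of the nvme-cli output, and `eval` stores the value in a global associative array keyed by the field name (for example nvme2n2[mssrl]=128). A minimal sketch of that pattern, with a hypothetical helper name nvme_get_sketch and plain `nvme` on the PATH, not the verbatim SPDK nvme/functions.sh source:

    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        declare -gA "$ref"                        # declare the target global associative array by name
        while IFS=: read -r reg val; do           # split each "field : value" line on the colon
            reg=${reg//[[:space:]]/}              # strip padding around the field name
            val=${val# }                          # drop the single space after the colon
            [[ -n $reg && -n $val ]] || continue  # skip blank or value-less lines
            eval "${ref}[\$reg]=\$val"            # e.g. nvme2n2[mssrl]=128
        done < <(nvme "$cmd" "$dev")              # id-ns or id-ctrl output from nvme-cli
    }

Called as, say, nvme_get_sketch nvme2n2 id-ns /dev/nvme2n2, this reproduces the array that the trace keeps appending to; the remaining lbaf0..lbaf7 fields for nvme2n2 follow in the trace below.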
00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 
17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 
17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:37.622 17:12:20 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:37.622 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:37.623 
17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:37.623 17:12:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:37.623 17:12:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:37.623 17:12:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:37.623 17:12:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:37.623 17:12:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.623 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
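Just above, the outer loop moved on to the fourth controller: it found /sys/class/nvme/nvme3, resolved its PCI address 0000:00:13.0 via pci_can_use, and started nvme_get nvme3 id-ctrl /dev/nvme3, whose fields continue below. A rough sketch of that enumeration step, not the SPDK helpers themselves; resolving the device symlink with readlink is one way (assumed here) to recover the BDF that functions.sh records in bdfs[...]:

    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # no controllers present at all
        dev=${ctrl##*/}                                   # e.g. nvme3
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0 for a PCIe controller
        ctrls[$dev]=$dev
        bdfs[$dev]=$bdf
    done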
00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 
17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.624 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
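The id-ctrl parse above already recorded nvme3[oncs]=0x15d, and the selection pass that follows in the trace (functions.sh@184-199, the `(( oncs & 1 << 8 ))` test at @188) keeps only controllers whose ONCS field has bit 8 set, i.e. the Simple Copy Command capability that nvme_scc.sh exercises. A minimal sketch of that bit test, assuming oncs already holds the parsed hex value and using a hypothetical ctrl_has_scc_sketch name:

    ctrl_has_scc_sketch() {
        local oncs=$1                 # e.g. 0x15d from the id-ctrl output above
        (( oncs & (1 << 8) ))         # ONCS bit 8 = Copy command (SCC); 0x15d & 0x100 = 0x100
    }
    ctrl_has_scc_sketch 0x15d && echo "SCC supported"

In the trace below, all four QEMU controllers report oncs=0x15d and pass this check, and nvme1 at 0000:00:10.0 is the one ultimately chosen for the SCC test.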
00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.625 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:37.626 17:12:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:37.626 17:12:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:37.626 
17:12:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:08:37.626 17:12:20 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:08:37.626 17:12:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:08:37.626 17:12:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:08:37.626 17:12:20 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:38.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.460 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.460 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.460 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.460 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:08:38.722 17:12:21 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:38.722 17:12:21 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:38.722 17:12:21 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.722 17:12:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:38.722 ************************************ 00:08:38.722 START TEST nvme_simple_copy 00:08:38.722 ************************************ 00:08:38.722 17:12:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:38.983 Initializing NVMe Controllers 00:08:38.983 Attaching to 0000:00:10.0 00:08:38.983 Controller supports SCC. Attached to 0000:00:10.0 00:08:38.983 Namespace ID: 1 size: 6GB 00:08:38.983 Initialization complete. 00:08:38.983 00:08:38.983 Controller QEMU NVMe Ctrl (12340 ) 00:08:38.983 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:08:38.983 Namespace Block Size:4096 00:08:38.983 Writing LBAs 0 to 63 with Random Data 00:08:38.983 Copied LBAs from 0 - 63 to the Destination LBA 256 00:08:38.983 LBAs matching Written Data: 64 00:08:38.983 00:08:38.983 real 0m0.268s 00:08:38.983 user 0m0.090s 00:08:38.983 sys 0m0.076s 00:08:38.983 17:12:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.983 ************************************ 00:08:38.983 END TEST nvme_simple_copy 00:08:38.983 17:12:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:08:38.983 ************************************ 00:08:38.983 00:08:38.983 real 0m7.523s 00:08:38.983 user 0m1.037s 00:08:38.983 sys 0m1.326s 00:08:38.983 17:12:21 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.983 17:12:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:38.983 ************************************ 00:08:38.983 END TEST nvme_scc 00:08:38.983 ************************************ 00:08:38.983 17:12:21 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:08:38.983 17:12:21 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:08:38.983 17:12:21 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:08:38.983 17:12:21 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:08:38.984 17:12:21 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:08:38.984 17:12:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.984 17:12:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.984 17:12:21 -- common/autotest_common.sh@10 -- # set +x 00:08:38.984 ************************************ 00:08:38.984 START TEST nvme_fdp 00:08:38.984 ************************************ 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:08:38.984 * Looking for test storage... 
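Before this point, the nvme_scc run above picked its target with get_ctrl_with_feature scc: for each controller array filled in by the id-ctrl parsing, it reads ONCS through a nameref and tests bit 8 (the Copy command bit), which is why oncs=0x15d passes for every controller. A rough equivalent of that check; ctrl_has_scc_sketch is an illustrative name, not the functions.sh code.

    # Sketch of the SCC capability test seen in the trace: success when bit 8
    # (Copy) of the controller's ONCS value is set; 0x15d has that bit set.
    ctrl_has_scc_sketch() {
        local ctrl=$1 oncs
        local -n _ctrl=$ctrl                 # nameref into e.g. the nvme1 array
        oncs=${_ctrl[oncs]:-0}               # 0x15d for the controllers in this run
        (( oncs & 1 << 8 ))                  # shell arithmetic understands the 0x prefix
    }
    # Example: declare -A nvme1=([oncs]=0x15d); ctrl_has_scc_sketch nvme1 && echo "nvme1 has SCC"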
00:08:38.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.984 17:12:21 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:38.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.984 --rc genhtml_branch_coverage=1 00:08:38.984 --rc genhtml_function_coverage=1 00:08:38.984 --rc genhtml_legend=1 00:08:38.984 --rc geninfo_all_blocks=1 00:08:38.984 --rc geninfo_unexecuted_blocks=1 00:08:38.984 00:08:38.984 ' 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:38.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.984 --rc genhtml_branch_coverage=1 00:08:38.984 --rc genhtml_function_coverage=1 00:08:38.984 --rc genhtml_legend=1 00:08:38.984 --rc geninfo_all_blocks=1 00:08:38.984 --rc geninfo_unexecuted_blocks=1 00:08:38.984 00:08:38.984 ' 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:38.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.984 --rc genhtml_branch_coverage=1 00:08:38.984 --rc genhtml_function_coverage=1 00:08:38.984 --rc genhtml_legend=1 00:08:38.984 --rc geninfo_all_blocks=1 00:08:38.984 --rc geninfo_unexecuted_blocks=1 00:08:38.984 00:08:38.984 ' 00:08:38.984 17:12:21 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:38.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.984 --rc genhtml_branch_coverage=1 00:08:38.984 --rc genhtml_function_coverage=1 00:08:38.984 --rc genhtml_legend=1 00:08:38.984 --rc geninfo_all_blocks=1 00:08:38.984 --rc geninfo_unexecuted_blocks=1 00:08:38.984 00:08:38.984 ' 00:08:38.984 17:12:21 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.247 17:12:21 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.247 17:12:21 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.247 17:12:21 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.247 17:12:21 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.247 17:12:21 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.247 17:12:21 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.247 17:12:21 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.247 17:12:21 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:39.247 17:12:21 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
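The cmp_versions trace above (scripts/common.sh, reached via lt 1.15 2) decides whether the installed lcov is older than 2 so the matching LCOV_OPTS get exported. The comparison is a component-wise numeric check of dot-separated version strings; a condensed sketch of the same idea follows (version_lt_sketch is an illustrative name, and the handling of non-numeric components is simplified).

    # Sketch: succeed when version $1 sorts strictly before version $2,
    # comparing dot/dash-separated numeric components left to right.
    version_lt_sketch() {
        local -a ver1 ver2
        local v n
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    # Example: version_lt_sketch 1.15 2 && echo "lcov is older than 2"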
00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:39.247 17:12:21 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:39.247 17:12:21 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.247 17:12:21 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:39.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:39.509 Waiting for block devices as requested 00:08:39.509 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.768 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.768 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:40.027 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:45.304 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:45.304 17:12:27 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:45.304 17:12:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:45.304 17:12:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:45.304 17:12:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:45.304 17:12:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
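The scan_nvme_ctrls trace that starts above loops over /sys/class/nvme/nvme*, resolves each controller's PCI address, checks it with pci_can_use, and hands the device node to nvme_get; the ctrls/nvmes/bdfs arrays declared just before are then filled per controller (functions.sh@60-62 in the trace). A compressed sketch of that flow; scan_nvme_ctrls_sketch and the readlink-based BDF lookup are assumptions, and the real script's filtering is more involved.

    # Sketch of the controller enumeration traced above. Assumes the associative
    # arrays ctrls, nvmes and bdfs are already declared (functions.sh@10-12) and
    # that nvme_get_sketch from the earlier note is available.
    scan_nvme_ctrls_sketch() {
        local ctrl ctrl_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(readlink -f "$ctrl/device"); pci=${pci##*/}   # assumed BDF lookup, e.g. 0000:00:11.0
            # the real script also skips controllers rejected by pci_can_use "$pci"
            ctrl_dev=${ctrl##*/}                                # e.g. nvme0
            nvme_get_sketch "$ctrl_dev" "/dev/$ctrl_dev"
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
        done
    }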
00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:45.304 17:12:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:45.304 17:12:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.304 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:45.305 17:12:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.305 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:45.306 17:12:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:45.306 17:12:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:45.306 
17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:45.306 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:45.307 17:12:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:45.307 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:45.308 17:12:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:45.308 17:12:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:45.308 17:12:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:45.308 17:12:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:45.308 17:12:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.308 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 
17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.309 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 
17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:45.310 17:12:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:45.310 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:45.311 17:12:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.311 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:45.312 17:12:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:45.313 17:12:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:45.313 17:12:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:45.313 17:12:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:45.313 17:12:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:45.313 
17:12:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:45.313 17:12:27 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:45.313 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.314 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:45.315 17:12:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:45.315 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:45.316 17:12:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:45.316 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.317 17:12:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.317 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:45.318 17:12:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.318 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:45.319 
17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.319 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:45.320 17:12:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:45.320 17:12:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:45.320 17:12:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:45.320 17:12:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:45.320 17:12:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:45.320 17:12:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:45.320 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 
17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.321 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 
17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:45.322 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
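The long run of IFS=: / read / eval entries above and below is nvme/functions.sh walking a text dump of the controller's identify data one "register value" pair per line and storing each field into an nvme3[...] lookup table. A minimal sketch of that pattern follows; the $dump file, its "name : value" layout, and the trimming details are illustrative assumptions, not SPDK's verbatim code.

    # Hypothetical sketch of the parse loop traced here.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # drop padding around the register name
        val=${val#"${val%%[![:space:]]*}"}     # trim leading whitespace from the value
        [[ -n $val ]] || continue              # skip blank values, mirroring the [[ -n ... ]] gates in the trace
        nvme3[$reg]=$val
    done < "$dump"
    echo "sqes=${nvme3[sqes]} subnqn=${nvme3[subnqn]}"
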
00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:45.323 17:12:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
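The ctrl_has_fdp / get_ctratt walk that starts above and continues below is the controller filter for the FDP test: each controller's CTRATT word is read back and bit 19 (Flexible Data Placement support) is tested, so the controllers reporting 0x8000 are skipped and only nvme3 (0x88010) is echoed. A compact sketch of the same bit test, with the CTRATT values hard-coded here purely for illustration:

    declare -A ctratt=( [nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010 )
    for ctrl in "${!ctratt[@]}"; do
        # CTRATT bit 19 advertises Flexible Data Placement support.
        if (( ctratt[$ctrl] & 1 << 19 )); then
            echo "$ctrl supports FDP"      # only nvme3 (0x88010) passes on this runner
        fi
    done
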
00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:45.323 17:12:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:45.323 17:12:28 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:45.323 17:12:28 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:45.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:46.152 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.410 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.410 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.410 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.410 17:12:29 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:46.410 17:12:29 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:46.410 17:12:29 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.410 17:12:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:46.410 ************************************ 00:08:46.410 START TEST nvme_flexible_data_placement 00:08:46.410 ************************************ 00:08:46.410 17:12:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:46.669 Initializing NVMe Controllers 00:08:46.669 Attaching to 0000:00:13.0 00:08:46.669 Controller supports FDP Attached to 0000:00:13.0 00:08:46.669 Namespace ID: 1 Endurance Group ID: 1 00:08:46.669 Initialization complete. 00:08:46.669 00:08:46.669 ================================== 00:08:46.669 == FDP tests for Namespace: #01 == 00:08:46.669 ================================== 00:08:46.669 00:08:46.669 Get Feature: FDP: 00:08:46.669 ================= 00:08:46.669 Enabled: Yes 00:08:46.669 FDP configuration Index: 0 00:08:46.669 00:08:46.669 FDP configurations log page 00:08:46.669 =========================== 00:08:46.669 Number of FDP configurations: 1 00:08:46.669 Version: 0 00:08:46.669 Size: 112 00:08:46.669 FDP Configuration Descriptor: 0 00:08:46.669 Descriptor Size: 96 00:08:46.669 Reclaim Group Identifier format: 2 00:08:46.669 FDP Volatile Write Cache: Not Present 00:08:46.669 FDP Configuration: Valid 00:08:46.669 Vendor Specific Size: 0 00:08:46.669 Number of Reclaim Groups: 2 00:08:46.669 Number of Recalim Unit Handles: 8 00:08:46.669 Max Placement Identifiers: 128 00:08:46.669 Number of Namespaces Suppprted: 256 00:08:46.669 Reclaim unit Nominal Size: 6000000 bytes 00:08:46.669 Estimated Reclaim Unit Time Limit: Not Reported 00:08:46.669 RUH Desc #000: RUH Type: Initially Isolated 00:08:46.669 RUH Desc #001: RUH Type: Initially Isolated 00:08:46.669 RUH Desc #002: RUH Type: Initially Isolated 00:08:46.669 RUH Desc #003: RUH Type: Initially Isolated 00:08:46.669 RUH Desc #004: RUH Type: Initially Isolated 00:08:46.669 RUH Desc #005: RUH Type: Initially Isolated 00:08:46.669 RUH Desc #006: RUH Type: Initially Isolated 00:08:46.669 RUH Desc #007: RUH Type: Initially Isolated 00:08:46.669 00:08:46.669 FDP reclaim unit handle usage log page 00:08:46.669 ====================================== 00:08:46.669 Number of Reclaim Unit Handles: 8 00:08:46.669 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:46.669 RUH Usage Desc #001: RUH Attributes: Unused 00:08:46.669 RUH Usage Desc #002: RUH Attributes: Unused 00:08:46.669 RUH Usage Desc #003: RUH Attributes: Unused 00:08:46.669 RUH Usage Desc #004: RUH Attributes: Unused 00:08:46.669 RUH Usage Desc #005: RUH Attributes: Unused 00:08:46.669 RUH Usage Desc #006: RUH Attributes: Unused 00:08:46.669 RUH Usage Desc #007: RUH Attributes: Unused 00:08:46.669 00:08:46.669 FDP statistics log page 00:08:46.669 ======================= 00:08:46.669 Host bytes with metadata written: 1141075968 00:08:46.669 Media bytes with metadata written: 1141219328 00:08:46.669 Media bytes erased: 0 00:08:46.669 00:08:46.669 FDP Reclaim unit handle status 00:08:46.669 ============================== 00:08:46.669 Number of RUHS descriptors: 2 00:08:46.669 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003fc9 00:08:46.669 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:08:46.669 00:08:46.669 FDP write on placement id: 0 success 00:08:46.669 00:08:46.669 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:08:46.669 00:08:46.669 IO mgmt send: RUH update for Placement ID: #0 Success 00:08:46.669 00:08:46.669 Get Feature: FDP Events for Placement handle: #0 00:08:46.669 ======================== 00:08:46.669 Number of FDP Events: 6 00:08:46.669 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:08:46.669 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:08:46.669 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:08:46.669 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:08:46.669 FDP Event: #4 Type: Media Reallocated Enabled: No 00:08:46.669 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:08:46.669 00:08:46.669 FDP events log page 00:08:46.669 =================== 00:08:46.669 Number of FDP events: 1 00:08:46.669 FDP Event #0: 00:08:46.669 Event Type: RU Not Written to Capacity 00:08:46.669 Placement Identifier: Valid 00:08:46.669 NSID: Valid 00:08:46.669 Location: Valid 00:08:46.669 Placement Identifier: 0 00:08:46.669 Event Timestamp: 6 00:08:46.669 Namespace Identifier: 1 00:08:46.669 Reclaim Group Identifier: 0 00:08:46.669 Reclaim Unit Handle Identifier: 0 00:08:46.669 00:08:46.669 FDP test passed 00:08:46.669 00:08:46.669 real 0m0.234s 00:08:46.669 user 0m0.071s 00:08:46.669 sys 0m0.062s 00:08:46.669 17:12:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.669 ************************************ 00:08:46.669 END TEST nvme_flexible_data_placement 00:08:46.669 ************************************ 00:08:46.669 17:12:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:08:46.669 00:08:46.669 real 0m7.753s 00:08:46.669 user 0m1.009s 00:08:46.669 sys 0m1.426s 00:08:46.669 17:12:29 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.669 ************************************ 00:08:46.669 END TEST nvme_fdp 00:08:46.669 ************************************ 00:08:46.669 17:12:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:46.669 17:12:29 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:08:46.669 17:12:29 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:46.669 17:12:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.669 17:12:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.669 17:12:29 -- common/autotest_common.sh@10 -- # set +x 00:08:46.669 ************************************ 00:08:46.669 START TEST nvme_rpc 00:08:46.669 ************************************ 00:08:46.669 17:12:29 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:46.926 * Looking for test storage... 
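The START TEST / END TEST banners and the real/user/sys figures in this part of the log come from the run_test wrapper in autotest_common.sh, which frames each test script and times it. The sketch below only reproduces that visible shape; the real helper also tracks timing records and xtrace state, so treat this as an outline rather than the actual implementation.

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"      # the real/user/sys lines in the log come from timing the test body
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
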
00:08:46.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:46.926 17:12:29 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:46.926 17:12:29 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:46.926 17:12:29 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:46.926 17:12:29 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.926 17:12:29 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.927 17:12:29 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.927 --rc genhtml_branch_coverage=1 00:08:46.927 --rc genhtml_function_coverage=1 00:08:46.927 --rc genhtml_legend=1 00:08:46.927 --rc geninfo_all_blocks=1 00:08:46.927 --rc geninfo_unexecuted_blocks=1 00:08:46.927 00:08:46.927 ' 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.927 --rc genhtml_branch_coverage=1 00:08:46.927 --rc genhtml_function_coverage=1 00:08:46.927 --rc genhtml_legend=1 00:08:46.927 --rc geninfo_all_blocks=1 00:08:46.927 --rc geninfo_unexecuted_blocks=1 00:08:46.927 00:08:46.927 ' 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.927 --rc genhtml_branch_coverage=1 00:08:46.927 --rc genhtml_function_coverage=1 00:08:46.927 --rc genhtml_legend=1 00:08:46.927 --rc geninfo_all_blocks=1 00:08:46.927 --rc geninfo_unexecuted_blocks=1 00:08:46.927 00:08:46.927 ' 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.927 --rc genhtml_branch_coverage=1 00:08:46.927 --rc genhtml_function_coverage=1 00:08:46.927 --rc genhtml_legend=1 00:08:46.927 --rc geninfo_all_blocks=1 00:08:46.927 --rc geninfo_unexecuted_blocks=1 00:08:46.927 00:08:46.927 ' 00:08:46.927 17:12:29 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.927 17:12:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:46.927 17:12:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:08:46.927 17:12:29 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65572 00:08:46.927 17:12:29 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:08:46.927 17:12:29 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65572 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65572 ']' 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.927 17:12:29 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.927 17:12:29 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.184 [2024-10-30 17:12:29.910692] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
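get_first_nvme_bdf, traced a few entries above, builds the BDF list by asking scripts/gen_nvme.sh for an SPDK JSON config, pulling every traddr out with jq, and returning the first entry (0000:00:10.0 on this runner). In outline, with error handling reduced to a single guard:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"      # -> 0000:00:10.0 here
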
00:08:47.184 [2024-10-30 17:12:29.910814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65572 ] 00:08:47.184 [2024-10-30 17:12:30.072677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:47.442 [2024-10-30 17:12:30.172915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.442 [2024-10-30 17:12:30.173054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.009 17:12:30 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:48.009 17:12:30 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:48.009 17:12:30 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:08:48.268 Nvme0n1 00:08:48.268 17:12:31 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:08:48.268 17:12:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:08:48.268 request: 00:08:48.268 { 00:08:48.268 "bdev_name": "Nvme0n1", 00:08:48.268 "filename": "non_existing_file", 00:08:48.268 "method": "bdev_nvme_apply_firmware", 00:08:48.268 "req_id": 1 00:08:48.268 } 00:08:48.268 Got JSON-RPC error response 00:08:48.268 response: 00:08:48.268 { 00:08:48.268 "code": -32603, 00:08:48.268 "message": "open file failed." 00:08:48.268 } 00:08:48.268 17:12:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:08:48.268 17:12:31 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:08:48.268 17:12:31 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:08:48.527 17:12:31 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:48.527 17:12:31 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65572 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65572 ']' 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65572 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65572 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:48.527 killing process with pid 65572 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65572' 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65572 00:08:48.527 17:12:31 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65572 00:08:49.904 00:08:49.904 real 0m3.232s 00:08:49.904 user 0m6.096s 00:08:49.904 sys 0m0.515s 00:08:49.904 17:12:32 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.904 17:12:32 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.904 ************************************ 00:08:49.904 END TEST nvme_rpc 00:08:49.904 ************************************ 00:08:50.164 17:12:32 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:50.164 17:12:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:08:50.164 17:12:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.164 17:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:50.164 ************************************ 00:08:50.164 START TEST nvme_rpc_timeouts 00:08:50.164 ************************************ 00:08:50.164 17:12:32 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:50.164 * Looking for test storage... 00:08:50.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:50.164 17:12:32 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:50.164 17:12:32 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:08:50.164 17:12:32 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.164 17:12:33 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:50.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.164 --rc genhtml_branch_coverage=1 00:08:50.164 --rc genhtml_function_coverage=1 00:08:50.164 --rc genhtml_legend=1 00:08:50.164 --rc geninfo_all_blocks=1 00:08:50.164 --rc geninfo_unexecuted_blocks=1 00:08:50.164 00:08:50.164 ' 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:50.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.164 --rc genhtml_branch_coverage=1 00:08:50.164 --rc genhtml_function_coverage=1 00:08:50.164 --rc genhtml_legend=1 00:08:50.164 --rc geninfo_all_blocks=1 00:08:50.164 --rc geninfo_unexecuted_blocks=1 00:08:50.164 00:08:50.164 ' 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:50.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.164 --rc genhtml_branch_coverage=1 00:08:50.164 --rc genhtml_function_coverage=1 00:08:50.164 --rc genhtml_legend=1 00:08:50.164 --rc geninfo_all_blocks=1 00:08:50.164 --rc geninfo_unexecuted_blocks=1 00:08:50.164 00:08:50.164 ' 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:50.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.164 --rc genhtml_branch_coverage=1 00:08:50.164 --rc genhtml_function_coverage=1 00:08:50.164 --rc genhtml_legend=1 00:08:50.164 --rc geninfo_all_blocks=1 00:08:50.164 --rc geninfo_unexecuted_blocks=1 00:08:50.164 00:08:50.164 ' 00:08:50.164 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.164 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65632 00:08:50.164 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65632 00:08:50.164 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65669 00:08:50.164 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
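The nvme_rpc_timeouts run that follows snapshots the target's default configuration into one temp file, pushes new NVMe timeout options over the RPC socket, snapshots the configuration again, and then compares the two dumps setting by setting. The RPC calls appear verbatim in the trace below; the redirection into the two settings files is a reasonable reconstruction of the flow, not a literal copy of the script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_65632      # snapshot the defaults
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_65632     # snapshot after the change
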
00:08:50.164 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65669 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65669 ']' 00:08:50.164 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.164 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:50.164 [2024-10-30 17:12:33.118479] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:08:50.164 [2024-10-30 17:12:33.118775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65669 ] 00:08:50.424 [2024-10-30 17:12:33.279659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.424 [2024-10-30 17:12:33.377214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.424 [2024-10-30 17:12:33.377253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.001 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:51.001 17:12:33 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:08:51.001 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:08:51.001 Checking default timeout settings: 00:08:51.001 17:12:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:51.567 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:08:51.568 Making settings changes with rpc: 00:08:51.568 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:08:51.568 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:08:51.568 Check default vs. 
modified settings: 00:08:51.568 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65632 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65632 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:52.136 Setting action_on_timeout is changed as expected. 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65632 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65632 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:52.136 Setting timeout_us is changed as expected. 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65632 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65632 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:52.136 Setting timeout_admin_us is changed as expected. 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65632 /tmp/settings_modified_65632 00:08:52.136 17:12:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65669 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65669 ']' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65669 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65669 00:08:52.136 killing process with pid 65669 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65669' 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65669 00:08:52.136 17:12:34 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65669 00:08:53.513 RPC TIMEOUT SETTING TEST PASSED. 00:08:53.513 17:12:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
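Each "Setting X is changed as expected." line above is produced by the same three-stage pipeline: grep the setting out of a saved config dump, take the second field with awk, strip punctuation with sed, then compare the default value against the modified one. A condensed sketch of that check, reusing the file names from this run (the helper name and the exact pass/fail wording are illustrative):

    check_setting() {
        local name=$1 before after
        before=$(grep "$name" /tmp/settings_default_65632 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$name" /tmp/settings_modified_65632 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [[ $before == "$after" ]]; then
            echo "ERROR: $name was not changed" >&2
            return 1
        fi
        echo "Setting $name is changed as expected."
    }

    for s in action_on_timeout timeout_us timeout_admin_us; do check_setting "$s"; done
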
00:08:53.513 00:08:53.513 real 0m3.324s 00:08:53.513 user 0m6.515s 00:08:53.513 sys 0m0.469s 00:08:53.513 17:12:36 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.513 17:12:36 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:53.513 ************************************ 00:08:53.513 END TEST nvme_rpc_timeouts 00:08:53.513 ************************************ 00:08:53.513 17:12:36 -- spdk/autotest.sh@239 -- # uname -s 00:08:53.513 17:12:36 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:08:53.513 17:12:36 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:53.513 17:12:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:53.513 17:12:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.513 17:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:53.513 ************************************ 00:08:53.513 START TEST sw_hotplug 00:08:53.513 ************************************ 00:08:53.513 17:12:36 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:53.513 * Looking for test storage... 00:08:53.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:53.513 17:12:36 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:53.513 17:12:36 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:08:53.513 17:12:36 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:53.513 17:12:36 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:08:53.513 17:12:36 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.514 17:12:36 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:08:53.514 17:12:36 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:08:53.514 17:12:36 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.514 17:12:36 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:08:53.514 17:12:36 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.514 17:12:36 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.514 17:12:36 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.514 17:12:36 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:08:53.514 17:12:36 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.514 17:12:36 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:53.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.514 --rc genhtml_branch_coverage=1 00:08:53.514 --rc genhtml_function_coverage=1 00:08:53.514 --rc genhtml_legend=1 00:08:53.514 --rc geninfo_all_blocks=1 00:08:53.514 --rc geninfo_unexecuted_blocks=1 00:08:53.514 00:08:53.514 ' 00:08:53.514 17:12:36 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:53.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.514 --rc genhtml_branch_coverage=1 00:08:53.514 --rc genhtml_function_coverage=1 00:08:53.514 --rc genhtml_legend=1 00:08:53.514 --rc geninfo_all_blocks=1 00:08:53.514 --rc geninfo_unexecuted_blocks=1 00:08:53.514 00:08:53.514 ' 00:08:53.514 17:12:36 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:53.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.514 --rc genhtml_branch_coverage=1 00:08:53.514 --rc genhtml_function_coverage=1 00:08:53.514 --rc genhtml_legend=1 00:08:53.514 --rc geninfo_all_blocks=1 00:08:53.514 --rc geninfo_unexecuted_blocks=1 00:08:53.514 00:08:53.514 ' 00:08:53.514 17:12:36 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:53.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.514 --rc genhtml_branch_coverage=1 00:08:53.514 --rc genhtml_function_coverage=1 00:08:53.514 --rc genhtml_legend=1 00:08:53.514 --rc geninfo_all_blocks=1 00:08:53.514 --rc geninfo_unexecuted_blocks=1 00:08:53.514 00:08:53.514 ' 00:08:53.514 17:12:36 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:53.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.031 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:54.031 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:54.031 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:54.031 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:54.031 17:12:36 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:08:54.031 17:12:36 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:08:54.031 17:12:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
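nvme_in_userspace, expanded in the trace below, enumerates NVMe controllers by PCI class: it asks lspci for machine-readable output, keeps devices whose class/subclass/prog-if is 01/08/02, respects PCI_ALLOWED via pci_can_use, and then checks /sys/bus/pci/drivers/nvme for each BDF. The core lspci filter, assembled here into one pipeline from the individual commands visible in the trace:

    # Class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe).
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
    printf '%s\n' "${nvmes[@]}"      # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 here
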
00:08:54.031 17:12:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@233 -- # local class 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:54.031 17:12:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:54.032 17:12:36 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:08:54.032 17:12:36 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:54.032 17:12:36 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:08:54.032 17:12:36 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:08:54.032 17:12:36 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:54.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.549 Waiting for block devices as requested 00:08:54.549 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.549 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.549 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.549 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.824 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:59.824 17:12:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:08:59.824 17:12:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.085 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:00.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.086 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:00.347 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:00.608 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.608 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:00.608 17:12:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66524 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:00.608 17:12:43 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:00.608 17:12:43 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:00.608 17:12:43 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:00.608 17:12:43 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:00.608 17:12:43 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:00.608 17:12:43 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:00.868 Initializing NVMe Controllers 00:09:00.868 Attaching to 0000:00:10.0 00:09:00.868 Attaching to 0000:00:11.0 00:09:00.868 Attached to 0000:00:10.0 00:09:00.868 Attached to 0000:00:11.0 00:09:00.868 Initialization complete. Starting I/O... 
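The scripts/common.sh trace above (lines @233-@245) locates NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). A minimal sketch of that enumeration, with the pipeline copied verbatim from the xtrace; the wrapper name nvme_bdfs is illustrative and not an SPDK helper:

    nvme_bdfs() {
        # List the BDFs of devices whose PCI class/subclass/prog-if is 01/08/02,
        # i.e. NVM Express controllers. lspci -mm -n -D prints one quoted,
        # machine-readable record per device; field 2 is the class code.
        lspci -mm -n -D |
            grep -i -- -p02 |
            awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |
            tr -d '"'
    }

As the trace then shows, the test keeps only the first nvme_count=2 of the four controllers it found (0000:00:10.0 and 0000:00:11.0) and reruns setup.sh with PCI_ALLOWED restricted to those two, which is why the remaining controllers are reported as "Skipping denied controller".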
00:09:00.868 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:00.868 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:00.868 00:09:01.805 QEMU NVMe Ctrl (12340 ): 2517 I/Os completed (+2517) 00:09:01.805 QEMU NVMe Ctrl (12341 ): 2516 I/Os completed (+2516) 00:09:01.805 00:09:03.180 QEMU NVMe Ctrl (12340 ): 5595 I/Os completed (+3078) 00:09:03.180 QEMU NVMe Ctrl (12341 ): 5579 I/Os completed (+3063) 00:09:03.180 00:09:03.825 QEMU NVMe Ctrl (12340 ): 8795 I/Os completed (+3200) 00:09:03.825 QEMU NVMe Ctrl (12341 ): 8790 I/Os completed (+3211) 00:09:03.825 00:09:04.759 QEMU NVMe Ctrl (12340 ): 12491 I/Os completed (+3696) 00:09:04.759 QEMU NVMe Ctrl (12341 ): 12468 I/Os completed (+3678) 00:09:04.759 00:09:06.135 QEMU NVMe Ctrl (12340 ): 15600 I/Os completed (+3109) 00:09:06.135 QEMU NVMe Ctrl (12341 ): 15605 I/Os completed (+3137) 00:09:06.135 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:06.703 [2024-10-30 17:12:49.539623] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:06.703 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:06.703 [2024-10-30 17:12:49.540928] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.540985] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.541009] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.541033] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:06.703 [2024-10-30 17:12:49.543084] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.543140] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.543154] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.543167] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:06.703 [2024-10-30 17:12:49.562328] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:06.703 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:06.703 [2024-10-30 17:12:49.563546] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.563587] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.563606] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.563621] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:06.703 [2024-10-30 17:12:49.565283] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.565312] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.565327] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 [2024-10-30 17:12:49.565340] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:06.703 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:06.960 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:06.960 00:09:06.960 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:06.960 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:06.960 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:06.960 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:06.960 Attaching to 0000:00:10.0 00:09:06.960 Attached to 0000:00:10.0 00:09:06.960 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:06.960 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:06.960 17:12:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:06.960 Attaching to 0000:00:11.0 00:09:06.960 Attached to 0000:00:11.0 00:09:07.893 QEMU NVMe Ctrl (12340 ): 3136 I/Os completed (+3136) 00:09:07.893 QEMU NVMe Ctrl (12341 ): 2891 I/Os completed (+2891) 00:09:07.893 00:09:08.827 QEMU NVMe Ctrl (12340 ): 6396 I/Os completed (+3260) 00:09:08.827 QEMU NVMe Ctrl (12341 ): 6154 I/Os completed (+3263) 00:09:08.827 00:09:09.762 QEMU NVMe Ctrl (12340 ): 10076 I/Os completed (+3680) 00:09:09.762 QEMU NVMe Ctrl (12341 ): 9826 I/Os completed (+3672) 00:09:09.762 00:09:11.138 QEMU NVMe Ctrl (12340 ): 13744 I/Os completed (+3668) 00:09:11.138 QEMU NVMe Ctrl (12341 ): 13511 I/Os completed (+3685) 00:09:11.138 00:09:12.072 QEMU NVMe Ctrl (12340 ): 17002 I/Os completed (+3258) 00:09:12.072 QEMU NVMe Ctrl (12341 ): 16766 I/Os completed (+3255) 00:09:12.072 00:09:13.007 QEMU NVMe Ctrl (12340 ): 20699 I/Os completed (+3697) 00:09:13.007 QEMU NVMe Ctrl (12341 ): 20471 I/Os completed (+3705) 00:09:13.007 00:09:13.942 QEMU NVMe Ctrl (12340 ): 24176 I/Os completed (+3477) 00:09:13.942 QEMU NVMe Ctrl (12341 ): 23965 I/Os completed (+3494) 00:09:13.942 00:09:14.885 QEMU NVMe Ctrl (12340 ): 27372 I/Os completed (+3196) 00:09:14.885 QEMU NVMe Ctrl (12341 ): 27161 I/Os completed (+3196) 
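Each hotplug event in this run follows the same pattern: both controllers are surprise-removed (the echo 1 at sw_hotplug.sh@40, followed by the nvme_ctrlr_fail and unregister_dev messages from the hotplug example), then rescanned and handed back to uio_pci_generic (the echoes at @56 and @58-@62), after which the app re-attaches and the I/O counters resume. The xtrace does not show redirection targets, so the sysfs paths in this sketch are the standard kernel PCI interfaces and only an assumption about what the script writes; of them, only /sys/bus/pci/rescan is confirmed later in the log by the trap the test installs.

    # Hedged sketch of one remove/re-attach cycle, under the assumptions above.
    hotplug_cycle() {
        local bdf
        for bdf in "$@"; do
            echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise removal
        done
        sleep "${hotplug_wait:-6}"                          # give the app time to notice
        echo 1 > /sys/bus/pci/rescan                        # re-enumerate the bus
        for bdf in "$@"; do
            # steer the re-discovered device to uio_pci_generic, then clear the override
            echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
            echo "$bdf" > /sys/bus/pci/drivers_probe
            echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
        done
    }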
00:09:14.885 00:09:15.833 QEMU NVMe Ctrl (12340 ): 30780 I/Os completed (+3408) 00:09:15.833 QEMU NVMe Ctrl (12341 ): 30578 I/Os completed (+3417) 00:09:15.833 00:09:16.767 QEMU NVMe Ctrl (12340 ): 34420 I/Os completed (+3640) 00:09:16.767 QEMU NVMe Ctrl (12341 ): 34202 I/Os completed (+3624) 00:09:16.767 00:09:18.139 QEMU NVMe Ctrl (12340 ): 38043 I/Os completed (+3623) 00:09:18.139 QEMU NVMe Ctrl (12341 ): 37840 I/Os completed (+3638) 00:09:18.139 00:09:19.069 QEMU NVMe Ctrl (12340 ): 41708 I/Os completed (+3665) 00:09:19.069 QEMU NVMe Ctrl (12341 ): 41484 I/Os completed (+3644) 00:09:19.069 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:19.069 [2024-10-30 17:13:01.825330] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:19.069 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:19.069 [2024-10-30 17:13:01.826399] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.826512] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.826542] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.826598] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:19.069 [2024-10-30 17:13:01.828260] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.828683] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.828786] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.828833] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:19.069 [2024-10-30 17:13:01.846500] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:19.069 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:19.069 [2024-10-30 17:13:01.848020] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.848062] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.848082] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.848097] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:19.069 [2024-10-30 17:13:01.849770] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.849809] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.849824] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 [2024-10-30 17:13:01.849838] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:19.069 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:19.069 EAL: Scan for (pci) bus failed. 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:19.069 17:13:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:19.069 17:13:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:19.325 17:13:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:19.325 17:13:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:19.325 17:13:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:19.325 17:13:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:19.325 Attaching to 0000:00:10.0 00:09:19.325 Attached to 0000:00:10.0 00:09:19.325 17:13:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:19.325 17:13:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:19.325 17:13:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:19.325 Attaching to 0000:00:11.0 00:09:19.325 Attached to 0000:00:11.0 00:09:19.889 QEMU NVMe Ctrl (12340 ): 2116 I/Os completed (+2116) 00:09:19.889 QEMU NVMe Ctrl (12341 ): 1897 I/Os completed (+1897) 00:09:19.889 00:09:20.822 QEMU NVMe Ctrl (12340 ): 5622 I/Os completed (+3506) 00:09:20.822 QEMU NVMe Ctrl (12341 ): 5406 I/Os completed (+3509) 00:09:20.822 00:09:21.757 QEMU NVMe Ctrl (12340 ): 9293 I/Os completed (+3671) 00:09:21.757 QEMU NVMe Ctrl (12341 ): 9081 I/Os completed (+3675) 00:09:21.757 00:09:23.130 QEMU NVMe Ctrl (12340 ): 12948 I/Os completed (+3655) 00:09:23.130 QEMU NVMe Ctrl (12341 ): 12756 I/Os completed (+3675) 00:09:23.130 00:09:24.065 QEMU NVMe Ctrl (12340 ): 16621 I/Os completed (+3673) 00:09:24.065 QEMU NVMe Ctrl (12341 ): 16435 I/Os completed (+3679) 00:09:24.065 00:09:25.000 QEMU NVMe Ctrl (12340 ): 20254 I/Os completed (+3633) 00:09:25.000 QEMU NVMe Ctrl (12341 ): 20065 I/Os completed (+3630) 00:09:25.000 00:09:25.934 QEMU NVMe Ctrl (12340 ): 23541 I/Os completed (+3287) 00:09:25.935 QEMU NVMe Ctrl (12341 ): 23346 I/Os completed (+3281) 00:09:25.935 
00:09:26.927 QEMU NVMe Ctrl (12340 ): 27187 I/Os completed (+3646) 00:09:26.927 QEMU NVMe Ctrl (12341 ): 27014 I/Os completed (+3668) 00:09:26.927 00:09:27.859 QEMU NVMe Ctrl (12340 ): 30832 I/Os completed (+3645) 00:09:27.859 QEMU NVMe Ctrl (12341 ): 30669 I/Os completed (+3655) 00:09:27.859 00:09:28.793 QEMU NVMe Ctrl (12340 ): 34477 I/Os completed (+3645) 00:09:28.793 QEMU NVMe Ctrl (12341 ): 34322 I/Os completed (+3653) 00:09:28.793 00:09:30.169 QEMU NVMe Ctrl (12340 ): 38159 I/Os completed (+3682) 00:09:30.169 QEMU NVMe Ctrl (12341 ): 37994 I/Os completed (+3672) 00:09:30.169 00:09:31.104 QEMU NVMe Ctrl (12340 ): 41821 I/Os completed (+3662) 00:09:31.104 QEMU NVMe Ctrl (12341 ): 41673 I/Os completed (+3679) 00:09:31.104 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:31.362 [2024-10-30 17:13:14.139948] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:31.362 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:31.362 [2024-10-30 17:13:14.140910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.140955] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.140971] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.140986] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:31.362 [2024-10-30 17:13:14.142608] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.142647] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.142662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.142674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:31.362 [2024-10-30 17:13:14.159370] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:31.362 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:31.362 [2024-10-30 17:13:14.160231] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.160268] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.160285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.160297] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:31.362 [2024-10-30 17:13:14.161639] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.161672] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.161688] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 [2024-10-30 17:13:14.161699] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:31.362 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:31.362 Attaching to 0000:00:10.0 00:09:31.362 Attached to 0000:00:10.0 00:09:31.620 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:31.620 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:31.620 17:13:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:31.620 Attaching to 0000:00:11.0 00:09:31.620 Attached to 0000:00:11.0 00:09:31.620 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:31.620 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:31.620 [2024-10-30 17:13:14.398385] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:43.815 17:13:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:43.815 17:13:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:43.815 17:13:26 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.86 00:09:43.815 17:13:26 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.86 00:09:43.815 17:13:26 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:09:43.815 17:13:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.86 00:09:43.815 17:13:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.86 2 00:09:43.815 remove_attach_helper took 42.86s to complete (handling 2 nvme drive(s)) 17:13:26 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66524 00:09:50.443 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66524) - No such process 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66524 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67067 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:09:50.443 17:13:32 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67067 00:09:50.443 17:13:32 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67067 ']' 00:09:50.443 17:13:32 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.443 17:13:32 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:50.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.443 17:13:32 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.443 17:13:32 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:50.443 17:13:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:50.443 [2024-10-30 17:13:32.476834] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:09:50.443 [2024-10-30 17:13:32.476958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67067 ] 00:09:50.443 [2024-10-30 17:13:32.636253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.443 [2024-10-30 17:13:32.733666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:09:50.443 17:13:33 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.443 17:13:33 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:09:50.443 17:13:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:50.443 17:13:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:50.443 17:13:33 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:09:50.443 17:13:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:50.443 17:13:33 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:50.443 17:13:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:09:50.443 17:13:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:50.443 17:13:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:57.003 17:13:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:09:57.003 17:13:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:57.003 17:13:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:09:57.003 [2024-10-30 17:13:39.413560] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:57.003 [2024-10-30 17:13:39.414769] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-30 17:13:39.414806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.003 [2024-10-30 17:13:39.414818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.003 [2024-10-30 17:13:39.414835] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-30 17:13:39.414842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.003 [2024-10-30 17:13:39.414850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.003 [2024-10-30 17:13:39.414857] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-30 17:13:39.414865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.003 [2024-10-30 17:13:39.414871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.003 [2024-10-30 17:13:39.414882] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-30 17:13:39.414889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.003 [2024-10-30 17:13:39.414896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.003 [2024-10-30 17:13:39.813549] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:09:57.003 [2024-10-30 17:13:39.814784] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-30 17:13:39.814817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.003 [2024-10-30 17:13:39.814829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.003 [2024-10-30 17:13:39.814844] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-30 17:13:39.814854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.003 [2024-10-30 17:13:39.814861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.003 [2024-10-30 17:13:39.814869] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-30 17:13:39.814876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.003 [2024-10-30 17:13:39.814883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.003 [2024-10-30 17:13:39.814890] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-30 17:13:39.814898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:57.003 [2024-10-30 17:13:39.814904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:57.003 17:13:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.003 17:13:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:57.003 17:13:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:09:57.003 17:13:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 
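In this phase of the test (use_bdev=true, driven through spdk_tgt rather than the hotplug example), the helper decides whether a removal has taken effect by asking the target which NVMe bdevs still exist and polling until the removed controller's BDF disappears. The jq filter and the "Still waiting for %s to be gone" loop are taken from the trace; the sketch below only wraps them, and the rpc.py invocation is an illustrative stand-in for the test's rpc_cmd/bdev_bdfs helpers.

    # List the PCI addresses backing the currently registered NVMe bdevs.
    bdev_bdfs() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    # Poll every 0.5s until no NVMe bdevs remain, mirroring the wait loop in the log.
    wait_until_gone() {
        local bdfs
        bdfs=($(bdev_bdfs))
        while ((${#bdfs[@]} > 0)); do
            printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
            sleep 0.5
            bdfs=($(bdev_bdfs))
        done
    }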
00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.262 17:13:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:09.462 17:13:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:09.462 17:13:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:09.462 17:13:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.462 [2024-10-30 17:13:52.213775] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:09.462 [2024-10-30 17:13:52.215141] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.462 [2024-10-30 17:13:52.215176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.462 [2024-10-30 17:13:52.215186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.462 [2024-10-30 17:13:52.215212] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.462 [2024-10-30 17:13:52.215221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.462 [2024-10-30 17:13:52.215230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.462 [2024-10-30 17:13:52.215238] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.462 [2024-10-30 17:13:52.215246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.462 [2024-10-30 17:13:52.215252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.462 [2024-10-30 17:13:52.215260] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.462 [2024-10-30 17:13:52.215267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.462 [2024-10-30 17:13:52.215274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:09.462 17:13:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:09.462 17:13:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:09.462 17:13:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:09.462 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:10.032 [2024-10-30 17:13:52.713774] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:10.032 [2024-10-30 17:13:52.714915] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.032 [2024-10-30 17:13:52.714946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.033 [2024-10-30 17:13:52.714958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.033 [2024-10-30 17:13:52.714972] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.033 [2024-10-30 17:13:52.714980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.033 [2024-10-30 17:13:52.714987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.033 [2024-10-30 17:13:52.714996] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.033 [2024-10-30 17:13:52.715002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.033 [2024-10-30 17:13:52.715010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.033 [2024-10-30 17:13:52.715017] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.033 [2024-10-30 17:13:52.715025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.033 [2024-10-30 17:13:52.715031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:10.033 17:13:52 
sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:10.033 17:13:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.033 17:13:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:10.033 17:13:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:10.033 17:13:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:10.307 17:13:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:10.307 17:13:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:10.307 17:13:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:22.632 17:14:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.632 17:14:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:22.632 17:14:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.632 [2024-10-30 17:14:05.113985] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:22.632 [2024-10-30 17:14:05.115420] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:22.632 [2024-10-30 17:14:05.115515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:22.632 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.632 [2024-10-30 17:14:05.115606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.632 [2024-10-30 17:14:05.115679] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.632 [2024-10-30 17:14:05.115699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.632 [2024-10-30 17:14:05.115752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.632 [2024-10-30 17:14:05.115777] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.632 [2024-10-30 17:14:05.115828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.632 [2024-10-30 17:14:05.115854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.632 [2024-10-30 17:14:05.115881] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.632 [2024-10-30 17:14:05.115897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.632 [2024-10-30 17:14:05.115920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:22.632 17:14:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.632 17:14:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:22.632 17:14:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:22.632 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:22.632 [2024-10-30 17:14:05.513986] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:22.632 [2024-10-30 17:14:05.515166] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.632 [2024-10-30 17:14:05.515196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.632 [2024-10-30 17:14:05.515220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.632 [2024-10-30 17:14:05.515234] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.632 [2024-10-30 17:14:05.515243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.632 [2024-10-30 17:14:05.515250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.632 [2024-10-30 17:14:05.515259] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.632 [2024-10-30 17:14:05.515265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.632 [2024-10-30 17:14:05.515274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.632 [2024-10-30 17:14:05.515282] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.632 [2024-10-30 17:14:05.515289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.632 [2024-10-30 17:14:05.515295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:22.906 17:14:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.906 17:14:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:22.906 17:14:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.906 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:23.167 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:23.167 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.167 17:14:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.63 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.63 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:10:35.394 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:35.394 17:14:17 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:35.394 17:14:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:35.394 17:14:17 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:41.988 17:14:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.988 17:14:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.988 17:14:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.988 17:14:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.988 17:14:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.988 17:14:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:41.988 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:41.988 [2024-10-30 17:14:24.075624] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:41.988 [2024-10-30 17:14:24.076660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.988 [2024-10-30 17:14:24.076762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.988 [2024-10-30 17:14:24.076835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.988 [2024-10-30 17:14:24.076912] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.989 [2024-10-30 17:14:24.076932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.989 [2024-10-30 17:14:24.076996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.989 [2024-10-30 17:14:24.077024] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.989 [2024-10-30 17:14:24.077042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.989 [2024-10-30 17:14:24.077092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.989 [2024-10-30 17:14:24.077146] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.989 [2024-10-30 17:14:24.077165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.989 [2024-10-30 17:14:24.077255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.989 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:41.989 17:14:24 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.989 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.989 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.989 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.989 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.989 17:14:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.989 17:14:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.989 [2024-10-30 17:14:24.575626] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:41.989 [2024-10-30 17:14:24.576647] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.989 [2024-10-30 17:14:24.576743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.989 [2024-10-30 17:14:24.576805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.989 [2024-10-30 17:14:24.576868] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.989 [2024-10-30 17:14:24.576888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.989 [2024-10-30 17:14:24.576937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.989 [2024-10-30 17:14:24.576965] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.989 [2024-10-30 17:14:24.576981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.989 [2024-10-30 17:14:24.577129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.989 [2024-10-30 17:14:24.577154] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.989 [2024-10-30 17:14:24.577171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.989 [2024-10-30 17:14:24.577194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.989 17:14:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.989 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:41.989 17:14:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:42.251 17:14:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.251 17:14:25 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.251 17:14:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:42.251 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:42.512 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:42.512 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:42.512 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:42.512 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:42.512 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:42.512 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:42.512 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:42.512 17:14:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.730 17:14:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.730 17:14:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.730 17:14:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.730 17:14:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.730 17:14:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.730 17:14:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:54.730 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:54.730 [2024-10-30 17:14:37.475849] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:54.730 [2024-10-30 17:14:37.476755] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.730 [2024-10-30 17:14:37.476787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.730 [2024-10-30 17:14:37.476797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.730 [2024-10-30 17:14:37.476813] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.730 [2024-10-30 17:14:37.476820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.730 [2024-10-30 17:14:37.476828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.730 [2024-10-30 17:14:37.476835] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.730 [2024-10-30 17:14:37.476843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.730 [2024-10-30 17:14:37.476850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.730 [2024-10-30 17:14:37.476858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.730 [2024-10-30 17:14:37.476864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.730 [2024-10-30 17:14:37.476872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.992 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:54.992 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.992 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.992 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.992 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.992 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.992 17:14:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.992 17:14:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.992 17:14:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.253 [2024-10-30 17:14:37.975854] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
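A note on the bdev_bdfs helper that recurs throughout this trace: sw_hotplug.sh@12-13 simply asks the running target for its bdevs over RPC and extracts the NVMe PCI addresses, and the loop polls that list until the removed controller's BDF disappears (hence the "Still waiting for ... to be gone" lines). A rough standalone equivalent, assuming the stock scripts/rpc.py client and the default RPC socket rather than the test's rpc_cmd wrapper:

  # List the PCI addresses (BDFs) behind the currently attached NVMe bdevs;
  # sort -u collapses multiple namespaces of one controller into a single BDF.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[].driver_specific.nvme[].pci_address' \
    | sort -u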
00:10:55.253 [2024-10-30 17:14:37.977011] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.253 [2024-10-30 17:14:37.977040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.253 [2024-10-30 17:14:37.977052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.253 [2024-10-30 17:14:37.977063] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.253 [2024-10-30 17:14:37.977073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.253 [2024-10-30 17:14:37.977080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.253 [2024-10-30 17:14:37.977089] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.253 [2024-10-30 17:14:37.977096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.253 [2024-10-30 17:14:37.977104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.253 [2024-10-30 17:14:37.977110] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.253 [2024-10-30 17:14:37.977118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.253 [2024-10-30 17:14:37.977125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.253 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:55.253 17:14:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:55.513 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:55.513 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:55.513 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:55.513 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.513 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.513 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.513 17:14:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.513 17:14:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.772 17:14:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.772 17:14:38 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:55.772 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:56.031 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:56.031 17:14:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.243 17:14:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.243 17:14:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.243 17:14:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.243 17:14:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.243 17:14:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.243 17:14:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:08.243 17:14:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:08.243 [2024-10-30 17:14:50.876401] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:08.243 [2024-10-30 17:14:50.877525] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.243 [2024-10-30 17:14:50.877628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.243 [2024-10-30 17:14:50.877642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.243 [2024-10-30 17:14:50.877658] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.243 [2024-10-30 17:14:50.877665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.244 [2024-10-30 17:14:50.877674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.244 [2024-10-30 17:14:50.877681] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.244 [2024-10-30 17:14:50.877693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.244 [2024-10-30 17:14:50.877699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.244 [2024-10-30 17:14:50.877708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.244 [2024-10-30 17:14:50.877714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.244 [2024-10-30 17:14:50.877730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.502 [2024-10-30 17:14:51.276394] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
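The echo statements at sw_hotplug.sh@39-62 drive the hot-remove and re-attach through sysfs; the script hides the target files behind redirections set up earlier, so the paths below are only an illustrative sketch based on the standard Linux PCI sysfs interface, not the literal redirections in the script:

  bdf=0000:00:10.0
  # Hot-remove the controller; its in-flight admin commands get aborted, which is
  # what produces the nvme_ctrlr_fail / "ABORTED - BY REQUEST" messages above.
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"
  # ...poll bdev_get_bdevs until the BDF is gone, then bring the device back
  # and hand it to uio_pci_generic:
  echo 1 > /sys/bus/pci/rescan
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"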
00:11:08.502 [2024-10-30 17:14:51.277665] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.502 [2024-10-30 17:14:51.277694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.502 [2024-10-30 17:14:51.277706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.502 [2024-10-30 17:14:51.277733] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.502 [2024-10-30 17:14:51.277742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.502 [2024-10-30 17:14:51.277750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.502 [2024-10-30 17:14:51.277758] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.502 [2024-10-30 17:14:51.277765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.502 [2024-10-30 17:14:51.277774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.502 [2024-10-30 17:14:51.277781] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.503 [2024-10-30 17:14:51.277791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.503 [2024-10-30 17:14:51.277797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.503 17:14:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.503 17:14:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.503 17:14:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.503 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:08.761 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:08.761 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.761 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.761 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.761 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:08.761 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:08.761 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.761 17:14:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.71 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.71 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.71 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.71 2 00:11:20.978 remove_attach_helper took 45.71s to complete (handling 2 nvme drive(s)) 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:20.978 17:15:03 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67067 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67067 ']' 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67067 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67067 00:11:20.978 killing process with pid 67067 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67067' 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67067 00:11:20.978 17:15:03 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67067 00:11:21.952 17:15:04 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:22.229 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:22.802 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:22.802 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:22.802 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:23.064 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:23.064 00:11:23.064 real 2m29.567s 00:11:23.064 user 1m51.591s 00:11:23.064 sys 0m16.704s 00:11:23.064 17:15:05 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.064 ************************************ 00:11:23.064 END TEST sw_hotplug 00:11:23.064 ************************************ 00:11:23.064 17:15:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 17:15:05 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:23.064 17:15:05 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:23.064 17:15:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:23.064 17:15:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.064 17:15:05 -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 ************************************ 00:11:23.064 START TEST nvme_xnvme 00:11:23.064 ************************************ 00:11:23.064 17:15:05 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:23.064 * Looking for test storage... 00:11:23.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:23.064 17:15:05 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:23.064 17:15:05 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:11:23.064 17:15:05 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:23.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.326 --rc genhtml_branch_coverage=1 00:11:23.326 --rc genhtml_function_coverage=1 00:11:23.326 --rc genhtml_legend=1 00:11:23.326 --rc geninfo_all_blocks=1 00:11:23.326 --rc geninfo_unexecuted_blocks=1 00:11:23.326 00:11:23.326 ' 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:23.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.326 --rc genhtml_branch_coverage=1 00:11:23.326 --rc genhtml_function_coverage=1 00:11:23.326 --rc genhtml_legend=1 00:11:23.326 --rc geninfo_all_blocks=1 00:11:23.326 --rc geninfo_unexecuted_blocks=1 00:11:23.326 00:11:23.326 ' 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:23.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.326 --rc genhtml_branch_coverage=1 00:11:23.326 --rc genhtml_function_coverage=1 00:11:23.326 --rc genhtml_legend=1 00:11:23.326 --rc geninfo_all_blocks=1 00:11:23.326 --rc geninfo_unexecuted_blocks=1 00:11:23.326 00:11:23.326 ' 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:23.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.326 --rc genhtml_branch_coverage=1 00:11:23.326 --rc genhtml_function_coverage=1 00:11:23.326 --rc genhtml_legend=1 00:11:23.326 --rc geninfo_all_blocks=1 00:11:23.326 --rc geninfo_unexecuted_blocks=1 00:11:23.326 00:11:23.326 ' 00:11:23.326 17:15:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.326 17:15:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.326 17:15:06 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.326 17:15:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.326 17:15:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.326 17:15:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:23.326 17:15:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.326 17:15:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.326 17:15:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 ************************************ 00:11:23.326 START TEST xnvme_to_malloc_dd_copy 00:11:23.326 ************************************ 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:23.326 17:15:06 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:23.326 17:15:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 { 00:11:23.326 "subsystems": [ 00:11:23.326 { 00:11:23.326 "subsystem": "bdev", 00:11:23.326 "config": [ 00:11:23.326 { 00:11:23.326 "params": { 00:11:23.326 "block_size": 512, 00:11:23.326 "num_blocks": 2097152, 00:11:23.326 "name": "malloc0" 00:11:23.326 }, 00:11:23.326 "method": "bdev_malloc_create" 00:11:23.326 }, 00:11:23.326 { 00:11:23.326 "params": { 00:11:23.326 "io_mechanism": "libaio", 00:11:23.326 "filename": "/dev/nullb0", 00:11:23.326 "name": "null0" 00:11:23.326 }, 00:11:23.326 "method": "bdev_xnvme_create" 00:11:23.326 }, 00:11:23.327 { 00:11:23.327 "method": "bdev_wait_for_examine" 00:11:23.327 } 00:11:23.327 ] 00:11:23.327 } 00:11:23.327 ] 00:11:23.327 } 00:11:23.327 [2024-10-30 17:15:06.181445] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
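The copy test generates its bdev configuration on the fly (gen_conf) and hands it to spdk_dd on fd 62. A hand-run equivalent of this first libaio pass, with the config written to a temporary file instead of a process-substitution fd (file name illustrative), might look roughly like the sketch below; the values are the ones shown above, a 1 GiB null_blk device and 2097152 malloc blocks of 512 bytes:

  modprobe null_blk gb=1    # backing device for the xnvme bdev (/dev/nullb0)
  cat > /tmp/xnvme_copy.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
            "method": "bdev_malloc_create" },
          { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
            "method": "bdev_xnvme_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # malloc0 -> null0; the test then swaps --ib/--ob for the reverse pass, and the
  # later runs differ only in io_mechanism being io_uring instead of libaio.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json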
00:11:23.327 [2024-10-30 17:15:06.182029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68449 ] 00:11:23.588 [2024-10-30 17:15:06.348471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.588 [2024-10-30 17:15:06.468646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.136  [2024-10-30T17:15:09.690Z] Copying: 224/1024 [MB] (224 MBps) [2024-10-30T17:15:10.629Z] Copying: 449/1024 [MB] (224 MBps) [2024-10-30T17:15:11.567Z] Copying: 744/1024 [MB] (295 MBps) [2024-10-30T17:15:13.484Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:11:30.503 00:11:30.503 17:15:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:30.503 17:15:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:30.503 17:15:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:30.503 17:15:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:30.503 { 00:11:30.503 "subsystems": [ 00:11:30.503 { 00:11:30.503 "subsystem": "bdev", 00:11:30.503 "config": [ 00:11:30.503 { 00:11:30.503 "params": { 00:11:30.503 "block_size": 512, 00:11:30.503 "num_blocks": 2097152, 00:11:30.503 "name": "malloc0" 00:11:30.503 }, 00:11:30.503 "method": "bdev_malloc_create" 00:11:30.503 }, 00:11:30.503 { 00:11:30.503 "params": { 00:11:30.503 "io_mechanism": "libaio", 00:11:30.503 "filename": "/dev/nullb0", 00:11:30.503 "name": "null0" 00:11:30.503 }, 00:11:30.503 "method": "bdev_xnvme_create" 00:11:30.503 }, 00:11:30.503 { 00:11:30.503 "method": "bdev_wait_for_examine" 00:11:30.503 } 00:11:30.503 ] 00:11:30.503 } 00:11:30.503 ] 00:11:30.503 } 00:11:30.503 [2024-10-30 17:15:13.477161] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:11:30.503 [2024-10-30 17:15:13.477297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68538 ] 00:11:30.764 [2024-10-30 17:15:13.634061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.764 [2024-10-30 17:15:13.715226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.701  [2024-10-30T17:15:16.629Z] Copying: 304/1024 [MB] (304 MBps) [2024-10-30T17:15:17.572Z] Copying: 609/1024 [MB] (304 MBps) [2024-10-30T17:15:17.832Z] Copying: 914/1024 [MB] (305 MBps) [2024-10-30T17:15:19.750Z] Copying: 1024/1024 [MB] (average 304 MBps) 00:11:36.769 00:11:36.769 17:15:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:36.769 17:15:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:36.769 17:15:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:36.769 17:15:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:37.030 17:15:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:37.030 17:15:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:37.030 { 00:11:37.030 "subsystems": [ 00:11:37.030 { 00:11:37.030 "subsystem": "bdev", 00:11:37.030 "config": [ 00:11:37.030 { 00:11:37.030 "params": { 00:11:37.030 "block_size": 512, 00:11:37.030 "num_blocks": 2097152, 00:11:37.030 "name": "malloc0" 00:11:37.030 }, 00:11:37.030 "method": "bdev_malloc_create" 00:11:37.030 }, 00:11:37.030 { 00:11:37.030 "params": { 00:11:37.030 "io_mechanism": "io_uring", 00:11:37.030 "filename": "/dev/nullb0", 00:11:37.030 "name": "null0" 00:11:37.030 }, 00:11:37.030 "method": "bdev_xnvme_create" 00:11:37.030 }, 00:11:37.030 { 00:11:37.030 "method": "bdev_wait_for_examine" 00:11:37.030 } 00:11:37.030 ] 00:11:37.030 } 00:11:37.030 ] 00:11:37.030 } 00:11:37.030 [2024-10-30 17:15:19.814978] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:11:37.030 [2024-10-30 17:15:19.815091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68620 ] 00:11:37.030 [2024-10-30 17:15:19.976183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.289 [2024-10-30 17:15:20.108046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.207  [2024-10-30T17:15:23.574Z] Copying: 239/1024 [MB] (239 MBps) [2024-10-30T17:15:24.517Z] Copying: 550/1024 [MB] (310 MBps) [2024-10-30T17:15:24.777Z] Copying: 860/1024 [MB] (310 MBps) [2024-10-30T17:15:26.690Z] Copying: 1024/1024 [MB] (average 290 MBps) 00:11:43.710 00:11:43.710 17:15:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:43.710 17:15:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:43.710 17:15:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:43.710 17:15:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 { 00:11:43.710 "subsystems": [ 00:11:43.710 { 00:11:43.710 "subsystem": "bdev", 00:11:43.710 "config": [ 00:11:43.710 { 00:11:43.710 "params": { 00:11:43.710 "block_size": 512, 00:11:43.710 "num_blocks": 2097152, 00:11:43.710 "name": "malloc0" 00:11:43.710 }, 00:11:43.710 "method": "bdev_malloc_create" 00:11:43.710 }, 00:11:43.710 { 00:11:43.710 "params": { 00:11:43.710 "io_mechanism": "io_uring", 00:11:43.710 "filename": "/dev/nullb0", 00:11:43.710 "name": "null0" 00:11:43.710 }, 00:11:43.710 "method": "bdev_xnvme_create" 00:11:43.710 }, 00:11:43.710 { 00:11:43.710 "method": "bdev_wait_for_examine" 00:11:43.710 } 00:11:43.710 ] 00:11:43.710 } 00:11:43.710 ] 00:11:43.710 } 00:11:43.710 [2024-10-30 17:15:26.680445] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:11:43.710 [2024-10-30 17:15:26.681088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68696 ] 00:11:43.971 [2024-10-30 17:15:26.840319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.232 [2024-10-30 17:15:26.956312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.147  [2024-10-30T17:15:30.070Z] Copying: 236/1024 [MB] (236 MBps) [2024-10-30T17:15:31.455Z] Copying: 535/1024 [MB] (299 MBps) [2024-10-30T17:15:31.717Z] Copying: 850/1024 [MB] (315 MBps) [2024-10-30T17:15:33.630Z] Copying: 1024/1024 [MB] (average 288 MBps) 00:11:50.649 00:11:50.649 17:15:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:50.649 17:15:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:11:50.649 ************************************ 00:11:50.649 END TEST xnvme_to_malloc_dd_copy 00:11:50.649 ************************************ 00:11:50.649 00:11:50.649 real 0m27.401s 00:11:50.649 user 0m23.726s 00:11:50.649 sys 0m3.147s 00:11:50.649 17:15:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:50.649 17:15:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:50.649 17:15:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:50.649 17:15:33 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:50.649 17:15:33 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:50.649 17:15:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:50.649 ************************************ 00:11:50.649 START TEST xnvme_bdevperf 00:11:50.649 ************************************ 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:50.649 
17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:50.649 17:15:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:50.649 { 00:11:50.649 "subsystems": [ 00:11:50.649 { 00:11:50.649 "subsystem": "bdev", 00:11:50.649 "config": [ 00:11:50.649 { 00:11:50.649 "params": { 00:11:50.649 "io_mechanism": "libaio", 00:11:50.649 "filename": "/dev/nullb0", 00:11:50.649 "name": "null0" 00:11:50.649 }, 00:11:50.649 "method": "bdev_xnvme_create" 00:11:50.649 }, 00:11:50.649 { 00:11:50.649 "method": "bdev_wait_for_examine" 00:11:50.649 } 00:11:50.649 ] 00:11:50.649 } 00:11:50.649 ] 00:11:50.649 } 00:11:50.910 [2024-10-30 17:15:33.639157] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:11:50.910 [2024-10-30 17:15:33.639509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68806 ] 00:11:50.910 [2024-10-30 17:15:33.799439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.910 [2024-10-30 17:15:33.874846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.170 Running I/O for 5 seconds... 00:11:53.496 199744.00 IOPS, 780.25 MiB/s [2024-10-30T17:15:37.420Z] 200064.00 IOPS, 781.50 MiB/s [2024-10-30T17:15:38.363Z] 200149.33 IOPS, 781.83 MiB/s [2024-10-30T17:15:39.308Z] 200096.00 IOPS, 781.62 MiB/s [2024-10-30T17:15:39.308Z] 200076.80 IOPS, 781.55 MiB/s 00:11:56.327 Latency(us) 00:11:56.327 [2024-10-30T17:15:39.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.327 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:56.327 null0 : 5.00 200014.35 781.31 0.00 0.00 317.79 311.93 1562.78 00:11:56.327 [2024-10-30T17:15:39.308Z] =================================================================================================================== 00:11:56.327 [2024-10-30T17:15:39.308Z] Total : 200014.35 781.31 0.00 0.00 317.79 311.93 1562.78 00:11:56.899 17:15:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:56.899 17:15:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:56.899 17:15:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:56.899 17:15:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:56.899 17:15:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:56.899 17:15:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:56.899 { 00:11:56.899 "subsystems": [ 00:11:56.899 { 00:11:56.899 "subsystem": "bdev", 00:11:56.899 "config": [ 00:11:56.899 { 00:11:56.899 "params": { 00:11:56.899 "io_mechanism": "io_uring", 00:11:56.899 "filename": "/dev/nullb0", 00:11:56.899 "name": "null0" 00:11:56.899 }, 00:11:56.899 "method": "bdev_xnvme_create" 00:11:56.899 }, 
00:11:56.899 { 00:11:56.899 "method": "bdev_wait_for_examine" 00:11:56.899 } 00:11:56.899 ] 00:11:56.899 } 00:11:56.899 ] 00:11:56.899 } 00:11:56.899 [2024-10-30 17:15:39.712962] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:11:56.899 [2024-10-30 17:15:39.713222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68880 ] 00:11:56.899 [2024-10-30 17:15:39.870973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.158 [2024-10-30 17:15:39.945011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.158 Running I/O for 5 seconds... 00:11:59.479 231616.00 IOPS, 904.75 MiB/s [2024-10-30T17:15:43.402Z] 231264.00 IOPS, 903.38 MiB/s [2024-10-30T17:15:44.341Z] 231168.00 IOPS, 903.00 MiB/s [2024-10-30T17:15:45.282Z] 231216.00 IOPS, 903.19 MiB/s 00:12:02.301 Latency(us) 00:12:02.301 [2024-10-30T17:15:45.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.301 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:02.301 null0 : 5.00 231218.21 903.20 0.00 0.00 274.32 145.72 2092.11 00:12:02.301 [2024-10-30T17:15:45.282Z] =================================================================================================================== 00:12:02.301 [2024-10-30T17:15:45.282Z] Total : 231218.21 903.20 0.00 0.00 274.32 145.72 2092.11 00:12:02.873 17:15:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:02.873 17:15:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:02.873 ************************************ 00:12:02.873 END TEST xnvme_bdevperf 00:12:02.873 ************************************ 00:12:02.873 00:12:02.873 real 0m12.174s 00:12:02.873 user 0m9.750s 00:12:02.873 sys 0m2.191s 00:12:02.873 17:15:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:02.873 17:15:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:02.873 ************************************ 00:12:02.873 END TEST nvme_xnvme 00:12:02.873 ************************************ 00:12:02.873 00:12:02.873 real 0m39.863s 00:12:02.873 user 0m33.600s 00:12:02.873 sys 0m5.456s 00:12:02.873 17:15:45 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:02.873 17:15:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:02.873 17:15:45 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:02.873 17:15:45 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:02.873 17:15:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:02.873 17:15:45 -- common/autotest_common.sh@10 -- # set +x 00:12:02.873 ************************************ 00:12:02.873 START TEST blockdev_xnvme 00:12:02.873 ************************************ 00:12:02.873 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:03.137 * Looking for test storage... 
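The xnvme_bdevperf runs above exercise the same null0 bdev with the bdevperf example app instead of spdk_dd: 4 KiB random reads at queue depth 64 for 5 seconds, restricted to the bdev named null0. A hand-run sketch of the io_uring case, again with the generated config dumped to a file (file name illustrative):

  cat > /tmp/xnvme_bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" },
            "method": "bdev_xnvme_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds,
  # -T the bdev to test (null0), matching the flags seen in the trace above.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdevperf.json -q 64 -o 4096 -w randread -t 5 -T null0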
00:12:03.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.138 17:15:45 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:03.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.138 --rc genhtml_branch_coverage=1 00:12:03.138 --rc genhtml_function_coverage=1 00:12:03.138 --rc genhtml_legend=1 00:12:03.138 --rc geninfo_all_blocks=1 00:12:03.138 --rc geninfo_unexecuted_blocks=1 00:12:03.138 00:12:03.138 ' 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:03.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.138 --rc genhtml_branch_coverage=1 00:12:03.138 --rc genhtml_function_coverage=1 00:12:03.138 --rc genhtml_legend=1 
00:12:03.138 --rc geninfo_all_blocks=1 00:12:03.138 --rc geninfo_unexecuted_blocks=1 00:12:03.138 00:12:03.138 ' 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:03.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.138 --rc genhtml_branch_coverage=1 00:12:03.138 --rc genhtml_function_coverage=1 00:12:03.138 --rc genhtml_legend=1 00:12:03.138 --rc geninfo_all_blocks=1 00:12:03.138 --rc geninfo_unexecuted_blocks=1 00:12:03.138 00:12:03.138 ' 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:03.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.138 --rc genhtml_branch_coverage=1 00:12:03.138 --rc genhtml_function_coverage=1 00:12:03.138 --rc genhtml_legend=1 00:12:03.138 --rc geninfo_all_blocks=1 00:12:03.138 --rc geninfo_unexecuted_blocks=1 00:12:03.138 00:12:03.138 ' 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:03.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69017 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69017 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 69017 ']' 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:03.138 17:15:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:03.138 17:15:45 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:03.138 [2024-10-30 17:15:46.050939] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:12:03.138 [2024-10-30 17:15:46.051064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69017 ] 00:12:03.417 [2024-10-30 17:15:46.206819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.417 [2024-10-30 17:15:46.292579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.005 17:15:46 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:04.005 17:15:46 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:12:04.005 17:15:46 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:04.005 17:15:46 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:04.005 17:15:46 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:04.005 17:15:46 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:04.005 17:15:46 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:04.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.528 Waiting for block devices as requested 00:12:04.528 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.528 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.528 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.790 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.085 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:10.085 17:15:52 blockdev_xnvme -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:10.085 17:15:52 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:10.085 nvme0n1 00:12:10.085 nvme1n1 00:12:10.085 nvme2n1 00:12:10.085 nvme2n2 00:12:10.085 nvme2n3 00:12:10.085 nvme3n1 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.085 
17:15:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:10.085 17:15:52 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.085 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:10.086 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "46fe66e1-45ff-4114-b833-50d5456619c7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "46fe66e1-45ff-4114-b833-50d5456619c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ed51db8c-9de1-458c-b747-e1d3ffa44371"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ed51db8c-9de1-458c-b747-e1d3ffa44371",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a09eb499-cbea-433c-80a6-b79bc08d1d74"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a09eb499-cbea-433c-80a6-b79bc08d1d74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "5e7950d0-8f61-4bd0-8fa2-7ec42d6a1e04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5e7950d0-8f61-4bd0-8fa2-7ec42d6a1e04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:10.086 ' "66566266-1ed8-4a8f-8125-76b5f0724c7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66566266-1ed8-4a8f-8125-76b5f0724c7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f655ef1b-51c9-4122-9562-748dce00822d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f655ef1b-51c9-4122-9562-748dce00822d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:10.086 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:10.086 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:10.086 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:10.086 17:15:52 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69017 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 69017 ']' 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69017 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69017 00:12:10.086 killing process with pid 69017 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 69017' 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69017 00:12:10.086 17:15:52 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69017 00:12:11.027 17:15:53 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:11.027 17:15:53 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:11.027 17:15:53 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:11.027 17:15:53 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:11.027 17:15:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.027 ************************************ 00:12:11.027 START TEST bdev_hello_world 00:12:11.027 ************************************ 00:12:11.027 17:15:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:11.027 [2024-10-30 17:15:54.002929] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:12:11.027 [2024-10-30 17:15:54.003213] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69375 ] 00:12:11.286 [2024-10-30 17:15:54.160255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.286 [2024-10-30 17:15:54.243220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.546 [2024-10-30 17:15:54.524077] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:11.546 [2024-10-30 17:15:54.524248] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:11.546 [2024-10-30 17:15:54.524266] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:11.546 [2024-10-30 17:15:54.525729] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:11.546 [2024-10-30 17:15:54.526133] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:11.546 [2024-10-30 17:15:54.526155] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:11.546 [2024-10-30 17:15:54.526401] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:11.546 00:12:11.546 [2024-10-30 17:15:54.526419] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:12.118 00:12:12.118 real 0m1.124s 00:12:12.118 user 0m0.858s 00:12:12.118 sys 0m0.154s 00:12:12.118 17:15:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.118 17:15:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:12.118 ************************************ 00:12:12.118 END TEST bdev_hello_world 00:12:12.118 ************************************ 00:12:12.379 17:15:55 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:12.379 17:15:55 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:12.379 17:15:55 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.379 17:15:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:12.379 ************************************ 00:12:12.380 START TEST bdev_bounds 00:12:12.380 ************************************ 00:12:12.380 Process bdevio pid: 69406 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69406 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69406' 00:12:12.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69406 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69406 ']' 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:12.380 17:15:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:12.380 [2024-10-30 17:15:55.198952] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:12:12.380 [2024-10-30 17:15:55.199089] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69406 ] 00:12:12.641 [2024-10-30 17:15:55.360264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.641 [2024-10-30 17:15:55.483056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.641 [2024-10-30 17:15:55.483375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.641 [2024-10-30 17:15:55.483400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.213 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:13.213 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:12:13.213 17:15:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:13.213 I/O targets: 00:12:13.213 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:13.213 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:13.213 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:13.213 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:13.214 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:13.214 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:13.214 00:12:13.214 00:12:13.214 CUnit - A unit testing framework for C - Version 2.1-3 00:12:13.214 http://cunit.sourceforge.net/ 00:12:13.214 00:12:13.214 00:12:13.214 Suite: bdevio tests on: nvme3n1 00:12:13.214 Test: blockdev write read block ...passed 00:12:13.214 Test: blockdev write zeroes read block ...passed 00:12:13.214 Test: blockdev write zeroes read no split ...passed 00:12:13.474 Test: blockdev write zeroes read split ...passed 00:12:13.474 Test: blockdev write zeroes read split partial ...passed 00:12:13.474 Test: blockdev reset ...passed 00:12:13.475 Test: blockdev write read 8 blocks ...passed 00:12:13.475 Test: blockdev write read size > 128k ...passed 00:12:13.475 Test: blockdev write read invalid size ...passed 00:12:13.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.475 Test: blockdev write read max offset ...passed 00:12:13.475 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.475 Test: blockdev writev readv 8 blocks ...passed 00:12:13.475 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.475 Test: blockdev writev readv block ...passed 00:12:13.475 Test: blockdev writev readv size > 128k ...passed 00:12:13.475 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.475 Test: blockdev comparev and writev ...passed 00:12:13.475 Test: blockdev nvme passthru rw ...passed 00:12:13.475 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.475 Test: blockdev nvme admin passthru ...passed 00:12:13.475 Test: blockdev copy ...passed 00:12:13.475 Suite: bdevio tests on: nvme2n3 00:12:13.475 Test: blockdev write read block ...passed 00:12:13.475 Test: blockdev write zeroes read block ...passed 00:12:13.475 Test: blockdev write zeroes read no split ...passed 00:12:13.475 Test: blockdev write zeroes read split ...passed 00:12:13.475 Test: blockdev write zeroes read split partial ...passed 00:12:13.475 Test: blockdev reset ...passed 
00:12:13.475 Test: blockdev write read 8 blocks ...passed 00:12:13.475 Test: blockdev write read size > 128k ...passed 00:12:13.475 Test: blockdev write read invalid size ...passed 00:12:13.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.475 Test: blockdev write read max offset ...passed 00:12:13.475 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.475 Test: blockdev writev readv 8 blocks ...passed 00:12:13.475 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.475 Test: blockdev writev readv block ...passed 00:12:13.475 Test: blockdev writev readv size > 128k ...passed 00:12:13.475 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.475 Test: blockdev comparev and writev ...passed 00:12:13.475 Test: blockdev nvme passthru rw ...passed 00:12:13.475 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.475 Test: blockdev nvme admin passthru ...passed 00:12:13.475 Test: blockdev copy ...passed 00:12:13.475 Suite: bdevio tests on: nvme2n2 00:12:13.475 Test: blockdev write read block ...passed 00:12:13.475 Test: blockdev write zeroes read block ...passed 00:12:13.475 Test: blockdev write zeroes read no split ...passed 00:12:13.475 Test: blockdev write zeroes read split ...passed 00:12:13.475 Test: blockdev write zeroes read split partial ...passed 00:12:13.475 Test: blockdev reset ...passed 00:12:13.475 Test: blockdev write read 8 blocks ...passed 00:12:13.475 Test: blockdev write read size > 128k ...passed 00:12:13.475 Test: blockdev write read invalid size ...passed 00:12:13.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.475 Test: blockdev write read max offset ...passed 00:12:13.475 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.475 Test: blockdev writev readv 8 blocks ...passed 00:12:13.475 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.475 Test: blockdev writev readv block ...passed 00:12:13.475 Test: blockdev writev readv size > 128k ...passed 00:12:13.475 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.475 Test: blockdev comparev and writev ...passed 00:12:13.475 Test: blockdev nvme passthru rw ...passed 00:12:13.475 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.475 Test: blockdev nvme admin passthru ...passed 00:12:13.475 Test: blockdev copy ...passed 00:12:13.475 Suite: bdevio tests on: nvme2n1 00:12:13.475 Test: blockdev write read block ...passed 00:12:13.475 Test: blockdev write zeroes read block ...passed 00:12:13.475 Test: blockdev write zeroes read no split ...passed 00:12:13.475 Test: blockdev write zeroes read split ...passed 00:12:13.475 Test: blockdev write zeroes read split partial ...passed 00:12:13.475 Test: blockdev reset ...passed 00:12:13.475 Test: blockdev write read 8 blocks ...passed 00:12:13.475 Test: blockdev write read size > 128k ...passed 00:12:13.475 Test: blockdev write read invalid size ...passed 00:12:13.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.475 Test: blockdev write read max offset ...passed 00:12:13.737 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.737 Test: blockdev writev readv 8 blocks 
...passed 00:12:13.737 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.737 Test: blockdev writev readv block ...passed 00:12:13.737 Test: blockdev writev readv size > 128k ...passed 00:12:13.737 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.737 Test: blockdev comparev and writev ...passed 00:12:13.737 Test: blockdev nvme passthru rw ...passed 00:12:13.737 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.737 Test: blockdev nvme admin passthru ...passed 00:12:13.737 Test: blockdev copy ...passed 00:12:13.737 Suite: bdevio tests on: nvme1n1 00:12:13.737 Test: blockdev write read block ...passed 00:12:13.737 Test: blockdev write zeroes read block ...passed 00:12:13.737 Test: blockdev write zeroes read no split ...passed 00:12:13.737 Test: blockdev write zeroes read split ...passed 00:12:13.737 Test: blockdev write zeroes read split partial ...passed 00:12:13.737 Test: blockdev reset ...passed 00:12:13.737 Test: blockdev write read 8 blocks ...passed 00:12:13.737 Test: blockdev write read size > 128k ...passed 00:12:13.737 Test: blockdev write read invalid size ...passed 00:12:13.737 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.737 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.737 Test: blockdev write read max offset ...passed 00:12:13.737 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.737 Test: blockdev writev readv 8 blocks ...passed 00:12:13.737 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.737 Test: blockdev writev readv block ...passed 00:12:13.737 Test: blockdev writev readv size > 128k ...passed 00:12:13.737 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.737 Test: blockdev comparev and writev ...passed 00:12:13.737 Test: blockdev nvme passthru rw ...passed 00:12:13.737 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.737 Test: blockdev nvme admin passthru ...passed 00:12:13.737 Test: blockdev copy ...passed 00:12:13.737 Suite: bdevio tests on: nvme0n1 00:12:13.737 Test: blockdev write read block ...passed 00:12:13.737 Test: blockdev write zeroes read block ...passed 00:12:13.737 Test: blockdev write zeroes read no split ...passed 00:12:13.737 Test: blockdev write zeroes read split ...passed 00:12:13.737 Test: blockdev write zeroes read split partial ...passed 00:12:13.737 Test: blockdev reset ...passed 00:12:13.737 Test: blockdev write read 8 blocks ...passed 00:12:13.737 Test: blockdev write read size > 128k ...passed 00:12:13.737 Test: blockdev write read invalid size ...passed 00:12:13.737 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:13.737 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:13.737 Test: blockdev write read max offset ...passed 00:12:13.737 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:13.737 Test: blockdev writev readv 8 blocks ...passed 00:12:13.737 Test: blockdev writev readv 30 x 1block ...passed 00:12:13.737 Test: blockdev writev readv block ...passed 00:12:13.737 Test: blockdev writev readv size > 128k ...passed 00:12:13.737 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:13.737 Test: blockdev comparev and writev ...passed 00:12:13.737 Test: blockdev nvme passthru rw ...passed 00:12:13.737 Test: blockdev nvme passthru vendor specific ...passed 00:12:13.737 Test: blockdev nvme admin passthru ...passed 00:12:13.737 Test: blockdev copy ...passed 
00:12:13.737 00:12:13.737 Run Summary: Type Total Ran Passed Failed Inactive 00:12:13.737 suites 6 6 n/a 0 0 00:12:13.737 tests 138 138 138 0 0 00:12:13.737 asserts 780 780 780 0 n/a 00:12:13.737 00:12:13.737 Elapsed time = 1.204 seconds 00:12:13.737 0 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69406 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69406 ']' 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69406 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69406 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69406' 00:12:13.737 killing process with pid 69406 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69406 00:12:13.737 17:15:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69406 00:12:14.682 17:15:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:14.682 00:12:14.682 real 0m2.329s 00:12:14.682 user 0m5.668s 00:12:14.682 sys 0m0.385s 00:12:14.682 17:15:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.682 17:15:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:14.682 ************************************ 00:12:14.682 END TEST bdev_bounds 00:12:14.682 ************************************ 00:12:14.682 17:15:57 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:14.682 17:15:57 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:14.682 17:15:57 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.682 17:15:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.682 ************************************ 00:12:14.682 START TEST bdev_nbd 00:12:14.682 ************************************ 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69468 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69468 /var/tmp/spdk-nbd.sock 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69468 ']' 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:14.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:14.682 17:15:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:14.682 [2024-10-30 17:15:57.611663] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:12:14.682 [2024-10-30 17:15:57.612010] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.943 [2024-10-30 17:15:57.778543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.943 [2024-10-30 17:15:57.899936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:15.516 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.777 
1+0 records in 00:12:15.777 1+0 records out 00:12:15.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046282 s, 8.9 MB/s 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:15.777 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.039 1+0 records in 00:12:16.039 1+0 records out 00:12:16.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417444 s, 9.8 MB/s 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:16.039 17:15:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:16.299 17:15:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.299 1+0 records in 00:12:16.299 1+0 records out 00:12:16.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663973 s, 6.2 MB/s 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:16.299 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.559 1+0 records in 00:12:16.559 1+0 records out 00:12:16.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747011 s, 5.5 MB/s 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:16.559 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.820 1+0 records in 00:12:16.820 1+0 records out 00:12:16.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745048 s, 5.5 MB/s 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:16.820 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:12:17.079 17:15:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.079 1+0 records in 00:12:17.079 1+0 records out 00:12:17.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111068 s, 3.7 MB/s 00:12:17.079 17:15:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.079 17:16:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:17.079 17:16:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.079 17:16:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:17.079 17:16:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:17.079 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.079 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:17.079 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd0", 00:12:17.339 "bdev_name": "nvme0n1" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd1", 00:12:17.339 "bdev_name": "nvme1n1" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd2", 00:12:17.339 "bdev_name": "nvme2n1" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd3", 00:12:17.339 "bdev_name": "nvme2n2" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd4", 00:12:17.339 "bdev_name": "nvme2n3" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd5", 00:12:17.339 "bdev_name": "nvme3n1" 00:12:17.339 } 00:12:17.339 ]' 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd0", 00:12:17.339 "bdev_name": "nvme0n1" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd1", 00:12:17.339 "bdev_name": "nvme1n1" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd2", 00:12:17.339 "bdev_name": "nvme2n1" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd3", 00:12:17.339 "bdev_name": "nvme2n2" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": "/dev/nbd4", 00:12:17.339 "bdev_name": "nvme2n3" 00:12:17.339 }, 00:12:17.339 { 00:12:17.339 "nbd_device": 
"/dev/nbd5", 00:12:17.339 "bdev_name": "nvme3n1" 00:12:17.339 } 00:12:17.339 ]' 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.339 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.603 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.865 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.126 17:16:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.387 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.648 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:18.909 17:16:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:19.170 /dev/nbd0 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.170 1+0 records in 00:12:19.170 1+0 records out 00:12:19.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079287 s, 5.2 MB/s 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:19.170 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:19.431 /dev/nbd1 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:19.431 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.431 1+0 records in 00:12:19.431 1+0 records out 00:12:19.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123091 s, 3.3 MB/s 00:12:19.432 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.432 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:19.432 17:16:02 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.432 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:19.432 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:19.432 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.432 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:19.432 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:19.693 /dev/nbd10 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.693 1+0 records in 00:12:19.693 1+0 records out 00:12:19.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00148014 s, 2.8 MB/s 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:19.693 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:19.955 /dev/nbd11 00:12:19.955 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:19.955 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:19.956 17:16:02 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.956 1+0 records in 00:12:19.956 1+0 records out 00:12:19.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00153071 s, 2.7 MB/s 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:19.956 17:16:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:20.217 /dev/nbd12 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.217 1+0 records in 00:12:20.217 1+0 records out 00:12:20.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130441 s, 3.1 MB/s 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:20.217 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:20.479 /dev/nbd13 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.479 1+0 records in 00:12:20.479 1+0 records out 00:12:20.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010771 s, 3.8 MB/s 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.479 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:20.480 17:16:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:20.480 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:20.480 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:20.480 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:20.480 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.480 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:20.741 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd0", 00:12:20.741 "bdev_name": "nvme0n1" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd1", 00:12:20.741 "bdev_name": "nvme1n1" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd10", 00:12:20.741 "bdev_name": "nvme2n1" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd11", 00:12:20.741 "bdev_name": "nvme2n2" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd12", 00:12:20.741 "bdev_name": "nvme2n3" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd13", 00:12:20.741 "bdev_name": "nvme3n1" 00:12:20.741 } 00:12:20.741 ]' 00:12:20.741 17:16:03 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd0", 00:12:20.741 "bdev_name": "nvme0n1" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd1", 00:12:20.741 "bdev_name": "nvme1n1" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd10", 00:12:20.741 "bdev_name": "nvme2n1" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd11", 00:12:20.741 "bdev_name": "nvme2n2" 00:12:20.741 }, 00:12:20.741 { 00:12:20.741 "nbd_device": "/dev/nbd12", 00:12:20.741 "bdev_name": "nvme2n3" 00:12:20.741 }, 00:12:20.741 { 00:12:20.742 "nbd_device": "/dev/nbd13", 00:12:20.742 "bdev_name": "nvme3n1" 00:12:20.742 } 00:12:20.742 ]' 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:20.742 /dev/nbd1 00:12:20.742 /dev/nbd10 00:12:20.742 /dev/nbd11 00:12:20.742 /dev/nbd12 00:12:20.742 /dev/nbd13' 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:20.742 /dev/nbd1 00:12:20.742 /dev/nbd10 00:12:20.742 /dev/nbd11 00:12:20.742 /dev/nbd12 00:12:20.742 /dev/nbd13' 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:20.742 256+0 records in 00:12:20.742 256+0 records out 00:12:20.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662955 s, 158 MB/s 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:20.742 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:21.003 256+0 records in 00:12:21.003 256+0 records out 00:12:21.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237685 s, 4.4 MB/s 00:12:21.003 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:21.003 17:16:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:21.264 256+0 records in 00:12:21.264 256+0 records out 00:12:21.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.279859 s, 
3.7 MB/s 00:12:21.264 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:21.264 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:21.526 256+0 records in 00:12:21.526 256+0 records out 00:12:21.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22759 s, 4.6 MB/s 00:12:21.526 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:21.526 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:21.526 256+0 records in 00:12:21.526 256+0 records out 00:12:21.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0994535 s, 10.5 MB/s 00:12:21.526 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:21.526 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:21.787 256+0 records in 00:12:21.787 256+0 records out 00:12:21.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183678 s, 5.7 MB/s 00:12:21.787 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:21.787 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:22.049 256+0 records in 00:12:22.049 256+0 records out 00:12:22.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219502 s, 4.8 MB/s 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.049 17:16:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.311 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:22.573 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:22.574 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:22.574 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.574 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.574 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:22.835 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.836 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.836 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.836 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.097 17:16:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:23.358 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:23.358 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:23.358 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:23.358 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.358 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:12:23.359 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:23.359 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.359 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.359 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:23.359 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.359 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:23.620 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:23.882 malloc_lvol_verify 00:12:23.882 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:23.882 1c82c6cf-0f46-4792-8bc3-c15026f43ca9 00:12:24.144 17:16:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:24.144 2e426a56-553a-4ea7-ac41-a8524c93b733 00:12:24.144 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:24.404 /dev/nbd0 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:24.404 mke2fs 1.47.0 (5-Feb-2023) 00:12:24.404 Discarding device 
blocks: 0/4096 done 00:12:24.404 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:24.404 00:12:24.404 Allocating group tables: 0/1 done 00:12:24.404 Writing inode tables: 0/1 done 00:12:24.404 Creating journal (1024 blocks): done 00:12:24.404 Writing superblocks and filesystem accounting information: 0/1 done 00:12:24.404 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.404 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69468 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69468 ']' 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69468 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69468 00:12:24.665 killing process with pid 69468 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69468' 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69468 00:12:24.665 17:16:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69468 00:12:25.238 17:16:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:25.238 00:12:25.238 real 0m10.567s 00:12:25.238 user 0m14.423s 00:12:25.238 sys 0m3.589s 00:12:25.238 17:16:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:25.238 17:16:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:25.238 ************************************ 00:12:25.238 END TEST bdev_nbd 00:12:25.238 
************************************ 00:12:25.238 17:16:08 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:25.238 17:16:08 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:25.238 17:16:08 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:25.238 17:16:08 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:25.238 17:16:08 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:25.238 17:16:08 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.238 17:16:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.238 ************************************ 00:12:25.238 START TEST bdev_fio 00:12:25.238 ************************************ 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:25.238 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:12:25.238 17:16:08 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.238 17:16:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:25.500 ************************************ 00:12:25.500 START TEST bdev_fio_rw_verify 00:12:25.500 ************************************ 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:25.500 17:16:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:25.500 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:25.500 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:25.500 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:25.500 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:25.500 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:25.500 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:25.500 fio-3.35 00:12:25.500 Starting 6 threads 00:12:37.735 00:12:37.735 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69873: Wed Oct 30 17:16:19 2024 00:12:37.735 read: IOPS=16.9k, BW=65.9MiB/s (69.1MB/s)(659MiB/10004msec) 00:12:37.735 slat (usec): min=2, max=2716, avg= 5.69, stdev=18.36 00:12:37.735 clat (usec): min=69, max=8390, avg=1061.23, 
stdev=910.39 00:12:37.735 lat (usec): min=72, max=8411, avg=1066.92, stdev=911.29 00:12:37.735 clat percentiles (usec): 00:12:37.735 | 50.000th=[ 742], 99.000th=[ 4047], 99.900th=[ 5669], 99.990th=[ 7504], 00:12:37.735 | 99.999th=[ 8356] 00:12:37.735 write: IOPS=17.1k, BW=66.9MiB/s (70.2MB/s)(670MiB/10004msec); 0 zone resets 00:12:37.735 slat (usec): min=12, max=4959, avg=39.18, stdev=153.63 00:12:37.735 clat (usec): min=73, max=14908, avg=1448.36, stdev=1266.18 00:12:37.735 lat (usec): min=87, max=14936, avg=1487.54, stdev=1283.33 00:12:37.735 clat percentiles (usec): 00:12:37.735 | 50.000th=[ 1106], 99.000th=[ 6259], 99.900th=[10028], 99.990th=[12387], 00:12:37.735 | 99.999th=[13960] 00:12:37.735 bw ( KiB/s): min=39877, max=173656, per=100.00%, avg=70205.63, stdev=6015.28, samples=114 00:12:37.735 iops : min= 9968, max=43414, avg=17550.68, stdev=1503.87, samples=114 00:12:37.735 lat (usec) : 100=0.09%, 250=8.91%, 500=21.48%, 750=13.62%, 1000=8.77% 00:12:37.735 lat (msec) : 2=27.04%, 4=17.57%, 10=2.47%, 20=0.05% 00:12:37.735 cpu : usr=43.32%, sys=33.07%, ctx=5683, majf=0, minf=16329 00:12:37.735 IO depths : 1=11.3%, 2=23.6%, 4=51.2%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.735 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.735 issued rwts: total=168826,171439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.735 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:37.735 00:12:37.735 Run status group 0 (all jobs): 00:12:37.735 READ: bw=65.9MiB/s (69.1MB/s), 65.9MiB/s-65.9MiB/s (69.1MB/s-69.1MB/s), io=659MiB (692MB), run=10004-10004msec 00:12:37.735 WRITE: bw=66.9MiB/s (70.2MB/s), 66.9MiB/s-66.9MiB/s (70.2MB/s-70.2MB/s), io=670MiB (702MB), run=10004-10004msec 00:12:37.735 ----------------------------------------------------- 00:12:37.736 Suppressions used: 00:12:37.736 count bytes template 00:12:37.736 6 48 /usr/src/fio/parse.c 00:12:37.736 2498 239808 /usr/src/fio/iolog.c 00:12:37.736 1 8 libtcmalloc_minimal.so 00:12:37.736 1 904 libcrypto.so 00:12:37.736 ----------------------------------------------------- 00:12:37.736 00:12:37.736 00:12:37.736 real 0m11.885s 00:12:37.736 user 0m27.458s 00:12:37.736 sys 0m20.142s 00:12:37.736 ************************************ 00:12:37.736 END TEST bdev_fio_rw_verify 00:12:37.736 ************************************ 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 
-- # local fio_dir=/usr/src/fio 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "46fe66e1-45ff-4114-b833-50d5456619c7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "46fe66e1-45ff-4114-b833-50d5456619c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ed51db8c-9de1-458c-b747-e1d3ffa44371"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ed51db8c-9de1-458c-b747-e1d3ffa44371",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a09eb499-cbea-433c-80a6-b79bc08d1d74"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a09eb499-cbea-433c-80a6-b79bc08d1d74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "5e7950d0-8f61-4bd0-8fa2-7ec42d6a1e04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5e7950d0-8f61-4bd0-8fa2-7ec42d6a1e04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "66566266-1ed8-4a8f-8125-76b5f0724c7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66566266-1ed8-4a8f-8125-76b5f0724c7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f655ef1b-51c9-4122-9562-748dce00822d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f655ef1b-51c9-4122-9562-748dce00822d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.736 /home/vagrant/spdk_repo/spdk 00:12:37.736 ************************************ 00:12:37.736 END TEST bdev_fio 00:12:37.736 ************************************ 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:12:37.736 00:12:37.736 real 0m12.055s 00:12:37.736 user 0m27.530s 00:12:37.736 sys 0m20.220s 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:37.736 17:16:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:37.736 17:16:20 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:37.736 17:16:20 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:37.736 17:16:20 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:12:37.736 17:16:20 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:37.736 17:16:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.736 ************************************ 00:12:37.736 START TEST bdev_verify 00:12:37.736 ************************************ 00:12:37.736 17:16:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:37.736 [2024-10-30 17:16:20.341872] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:12:37.736 [2024-10-30 17:16:20.342243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70043 ] 00:12:37.736 [2024-10-30 17:16:20.506288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:37.736 [2024-10-30 17:16:20.628459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.736 [2024-10-30 17:16:20.628604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.308 Running I/O for 5 seconds... 
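Stripped of the test-harness wrappers, the verify stage that has just started is a single bdevperf run against the bdev.json generated earlier in this job. A minimal annotated sketch of that invocation (flag comments reflect common bdevperf usage; the -C flag and the empty trailing argument seen in the log are left out here):

  ./build/examples/bdevperf \
      --json test/bdev/bdev.json \   # xNVMe bdev definitions for nvme0n1..nvme3n1
      -q 128 \                       # queue depth per job
      -o 4096 \                      # I/O size in bytes
      -w verify \                    # write a pattern, read it back, verify the data
      -t 5 \                         # run time in seconds
      -m 0x3                         # core mask: reactors on cores 0 and 1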
00:12:40.267 22624.00 IOPS, 88.38 MiB/s [2024-10-30T17:16:24.632Z] 22240.00 IOPS, 86.88 MiB/s [2024-10-30T17:16:25.567Z] 22602.67 IOPS, 88.29 MiB/s [2024-10-30T17:16:26.503Z] 22384.00 IOPS, 87.44 MiB/s [2024-10-30T17:16:26.503Z] 22195.00 IOPS, 86.70 MiB/s 00:12:43.522 Latency(us) 00:12:43.522 [2024-10-30T17:16:26.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.522 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.522 Verification LBA range: start 0x0 length 0xa0000 00:12:43.522 nvme0n1 : 5.08 1738.52 6.79 0.00 0.00 73483.75 13712.15 73400.32 00:12:43.522 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.522 Verification LBA range: start 0xa0000 length 0xa0000 00:12:43.522 nvme0n1 : 5.08 1738.05 6.79 0.00 0.00 73505.11 13208.02 79046.50 00:12:43.522 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.522 Verification LBA range: start 0x0 length 0xbd0bd 00:12:43.522 nvme1n1 : 5.08 2124.76 8.30 0.00 0.00 59840.51 6704.84 62107.96 00:12:43.522 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.522 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:43.522 nvme1n1 : 5.07 2242.25 8.76 0.00 0.00 56836.91 7410.61 62107.96 00:12:43.522 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.522 Verification LBA range: start 0x0 length 0x80000 00:12:43.522 nvme2n1 : 5.08 1789.55 6.99 0.00 0.00 71087.21 10284.11 70577.23 00:12:43.522 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.522 Verification LBA range: start 0x80000 length 0x80000 00:12:43.522 nvme2n1 : 5.05 1748.75 6.83 0.00 0.00 72750.69 11897.30 72593.72 00:12:43.522 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.522 Verification LBA range: start 0x0 length 0x80000 00:12:43.522 nvme2n2 : 5.08 1762.61 6.89 0.00 0.00 71940.08 10284.11 73803.62 00:12:43.522 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.523 Verification LBA range: start 0x80000 length 0x80000 00:12:43.523 nvme2n2 : 5.09 1761.98 6.88 0.00 0.00 71955.01 10485.76 68964.04 00:12:43.523 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.523 Verification LBA range: start 0x0 length 0x80000 00:12:43.523 nvme2n3 : 5.07 1743.24 6.81 0.00 0.00 72566.13 8166.79 68964.04 00:12:43.523 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.523 Verification LBA range: start 0x80000 length 0x80000 00:12:43.523 nvme2n3 : 5.07 1740.59 6.80 0.00 0.00 72676.99 10485.76 69367.34 00:12:43.523 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.523 Verification LBA range: start 0x0 length 0x20000 00:12:43.523 nvme3n1 : 5.09 1760.30 6.88 0.00 0.00 71714.27 4537.11 71787.13 00:12:43.523 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.523 Verification LBA range: start 0x20000 length 0x20000 00:12:43.523 nvme3n1 : 5.09 1760.85 6.88 0.00 0.00 71693.33 7158.55 70173.93 00:12:43.523 [2024-10-30T17:16:26.504Z] =================================================================================================================== 00:12:43.523 [2024-10-30T17:16:26.504Z] Total : 21911.46 85.59 0.00 0.00 69537.43 4537.11 79046.50 00:12:44.090 00:12:44.090 real 0m6.656s 00:12:44.090 user 0m10.900s 00:12:44.090 sys 0m1.353s 00:12:44.090 17:16:26 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:44.090 ************************************ 00:12:44.090 END TEST bdev_verify 00:12:44.090 17:16:26 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:44.090 ************************************ 00:12:44.090 17:16:26 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:44.090 17:16:26 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:12:44.090 17:16:26 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:44.090 17:16:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.090 ************************************ 00:12:44.090 START TEST bdev_verify_big_io 00:12:44.090 ************************************ 00:12:44.090 17:16:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:44.090 [2024-10-30 17:16:27.056331] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:12:44.090 [2024-10-30 17:16:27.056560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70137 ] 00:12:44.349 [2024-10-30 17:16:27.216297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:44.349 [2024-10-30 17:16:27.312377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.349 [2024-10-30 17:16:27.312461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.915 Running I/O for 5 seconds... 
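The big-I/O variant now starting differs from the previous verify pass only in the I/O size, -o 65536 instead of -o 4096, so every operation moves 64 KiB and the MiB/s column below is simply IOPS multiplied by 64 KiB. A quick shell-arithmetic sanity check with an illustrative IOPS value:

  iops=2048
  echo "$(( iops * 65536 / 1048576 )) MiB/s"   # 64 KiB per I/O: 2048 IOPS -> 128 MiB/s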
00:12:50.131 1280.00 IOPS, 80.00 MiB/s [2024-10-30T17:16:34.051Z] 2024.00 IOPS, 126.50 MiB/s [2024-10-30T17:16:34.051Z] 2762.67 IOPS, 172.67 MiB/s 00:12:51.070 Latency(us) 00:12:51.070 [2024-10-30T17:16:34.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.070 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x0 length 0xa000 00:12:51.070 nvme0n1 : 6.15 104.00 6.50 0.00 0.00 1170456.58 238752.69 1277649.53 00:12:51.070 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0xa000 length 0xa000 00:12:51.070 nvme0n1 : 6.00 114.73 7.17 0.00 0.00 1050644.66 107277.39 1858399.31 00:12:51.070 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x0 length 0xbd0b 00:12:51.070 nvme1n1 : 6.17 119.38 7.46 0.00 0.00 982404.50 8368.44 1058255.16 00:12:51.070 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:51.070 nvme1n1 : 5.89 130.47 8.15 0.00 0.00 894737.07 90338.86 1058255.16 00:12:51.070 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x0 length 0x8000 00:12:51.070 nvme2n1 : 6.09 99.90 6.24 0.00 0.00 1104792.36 262950.60 942105.21 00:12:51.070 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x8000 length 0x8000 00:12:51.070 nvme2n1 : 6.08 113.18 7.07 0.00 0.00 1007806.51 133088.49 1374441.16 00:12:51.070 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x0 length 0x8000 00:12:51.070 nvme2n2 : 6.17 134.85 8.43 0.00 0.00 830048.83 77433.30 838860.80 00:12:51.070 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x8000 length 0x8000 00:12:51.070 nvme2n2 : 6.13 93.93 5.87 0.00 0.00 1174884.04 132281.90 2129415.88 00:12:51.070 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x0 length 0x8000 00:12:51.070 nvme2n3 : 6.17 88.14 5.51 0.00 0.00 1221798.43 14317.10 2697260.11 00:12:51.070 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x8000 length 0x8000 00:12:51.070 nvme2n3 : 6.17 121.82 7.61 0.00 0.00 870335.51 89128.96 1548666.09 00:12:51.070 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x0 length 0x2000 00:12:51.070 nvme3n1 : 6.18 158.00 9.87 0.00 0.00 655532.98 6654.42 1690627.15 00:12:51.070 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:51.070 Verification LBA range: start 0x2000 length 0x2000 00:12:51.070 nvme3n1 : 6.18 121.61 7.60 0.00 0.00 843861.88 4184.22 2219754.73 00:12:51.070 [2024-10-30T17:16:34.051Z] =================================================================================================================== 00:12:51.070 [2024-10-30T17:16:34.051Z] Total : 1400.00 87.50 0.00 0.00 958889.83 4184.22 2697260.11 00:12:52.014 00:12:52.014 real 0m7.897s 00:12:52.014 user 0m14.618s 00:12:52.014 sys 0m0.377s 00:12:52.014 17:16:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.014 17:16:34 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.014 ************************************ 00:12:52.014 END TEST bdev_verify_big_io 00:12:52.014 ************************************ 00:12:52.014 17:16:34 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:52.014 17:16:34 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:12:52.014 17:16:34 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.014 17:16:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.014 ************************************ 00:12:52.014 START TEST bdev_write_zeroes 00:12:52.014 ************************************ 00:12:52.014 17:16:34 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:52.275 [2024-10-30 17:16:35.038328] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:12:52.275 [2024-10-30 17:16:35.038654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70247 ] 00:12:52.275 [2024-10-30 17:16:35.204226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.535 [2024-10-30 17:16:35.328087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.795 Running I/O for 1 seconds... 
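The write_zeroes pass now running exercises an I/O type these xNVMe bdevs actually advertise: the bdev dump earlier in the log reports "write_zeroes": true while "unmap": false, which is also why the trim-oriented fio stage above found no eligible bdevs. Which bdevs support a given I/O type can be listed with the same jq pattern the script used for unmap, for example:

  # sketch; assumes the bdev JSON printed above is stored in $bdevs
  jq -r 'select(.supported_io_types.write_zeroes == true) | .name' <<< "$bdevs"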
00:12:54.177 76864.00 IOPS, 300.25 MiB/s 00:12:54.177 Latency(us) 00:12:54.177 [2024-10-30T17:16:37.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.177 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:54.177 nvme0n1 : 1.02 12513.91 48.88 0.00 0.00 10217.18 6099.89 22282.24 00:12:54.177 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:54.177 nvme1n1 : 1.02 13642.32 53.29 0.00 0.00 9364.22 3932.16 18047.61 00:12:54.177 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:54.177 nvme2n1 : 1.02 12499.50 48.83 0.00 0.00 10213.15 6125.10 20870.70 00:12:54.177 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:54.177 nvme2n2 : 1.02 12556.74 49.05 0.00 0.00 10133.90 6150.30 20568.22 00:12:54.177 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:54.177 nvme2n3 : 1.02 12542.27 48.99 0.00 0.00 10108.78 5545.35 21173.17 00:12:54.177 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:54.177 nvme3n1 : 1.02 12528.18 48.94 0.00 0.00 10110.61 5570.56 21778.12 00:12:54.177 [2024-10-30T17:16:37.158Z] =================================================================================================================== 00:12:54.177 [2024-10-30T17:16:37.158Z] Total : 76282.91 297.98 0.00 0.00 10015.41 3932.16 22282.24 00:12:54.747 00:12:54.747 real 0m2.570s 00:12:54.747 user 0m1.906s 00:12:54.747 sys 0m0.485s 00:12:54.747 17:16:37 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.747 17:16:37 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:54.747 ************************************ 00:12:54.747 END TEST bdev_write_zeroes 00:12:54.747 ************************************ 00:12:54.747 17:16:37 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:54.747 17:16:37 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:12:54.747 17:16:37 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.747 17:16:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.747 ************************************ 00:12:54.747 START TEST bdev_json_nonenclosed 00:12:54.747 ************************************ 00:12:54.747 17:16:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:54.747 [2024-10-30 17:16:37.661958] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
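bdev_json_nonenclosed, starting here, is a negative test: bdevperf is handed a configuration whose top level is not a JSON object, and the run is expected to abort cleanly. The input file itself is not shown in this log; a hypothetical input of the shape being tested (illustrative only, not the repository's actual nonenclosed.json) could be produced like this:

  # valid JSON, but a top-level array rather than an enclosing { } object
  echo '[ { "subsystems": [] } ]' > nonenclosed.json

which matches the 'not enclosed in {}' error reported just below.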
00:12:54.747 [2024-10-30 17:16:37.662263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70300 ] 00:12:55.007 [2024-10-30 17:16:37.824486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.007 [2024-10-30 17:16:37.943003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.007 [2024-10-30 17:16:37.943297] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:55.007 [2024-10-30 17:16:37.943753] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:55.007 [2024-10-30 17:16:37.943778] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:55.268 00:12:55.268 real 0m0.541s 00:12:55.268 user 0m0.314s 00:12:55.268 sys 0m0.121s 00:12:55.268 17:16:38 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:55.268 ************************************ 00:12:55.268 END TEST bdev_json_nonenclosed 00:12:55.268 ************************************ 00:12:55.268 17:16:38 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:55.269 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:55.269 17:16:38 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:12:55.269 17:16:38 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:55.269 17:16:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.530 ************************************ 00:12:55.530 START TEST bdev_json_nonarray 00:12:55.530 ************************************ 00:12:55.530 17:16:38 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:55.530 [2024-10-30 17:16:38.325422] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:12:55.530 [2024-10-30 17:16:38.325560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70320 ] 00:12:55.530 [2024-10-30 17:16:38.490577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.791 [2024-10-30 17:16:38.609505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.791 [2024-10-30 17:16:38.609619] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
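bdev_json_nonarray is the companion negative test: the configuration parses as an object, but its "subsystems" member is not an array, exactly as the error above states. Again the input file is not shown; a hypothetical shape (illustrative only) could be:

  # "subsystems" supplied as an object instead of an array
  echo '{ "subsystems": {} }' > nonarray.json

In both negative tests the expected outcome is the spdk_app_stop'd-on-non-zero warning seen in the surrounding lines, after which control returns cleanly to the harness.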
00:12:55.791 [2024-10-30 17:16:38.609640] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:55.791 [2024-10-30 17:16:38.609651] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:56.051 00:12:56.051 real 0m0.546s 00:12:56.051 user 0m0.325s 00:12:56.051 sys 0m0.114s 00:12:56.051 ************************************ 00:12:56.051 END TEST bdev_json_nonarray 00:12:56.051 ************************************ 00:12:56.051 17:16:38 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:56.051 17:16:38 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:12:56.051 17:16:38 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:56.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:59.924 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:00.183 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:00.183 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:00.183 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:00.183 00:13:00.183 real 0m57.340s 00:13:00.183 user 1m25.515s 00:13:00.183 sys 0m37.628s 00:13:00.183 ************************************ 00:13:00.183 END TEST blockdev_xnvme 00:13:00.183 ************************************ 00:13:00.183 17:16:43 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:00.183 17:16:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.443 17:16:43 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:00.443 17:16:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:00.443 17:16:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:00.443 17:16:43 -- common/autotest_common.sh@10 -- # set +x 00:13:00.443 ************************************ 00:13:00.443 START TEST ublk 00:13:00.443 ************************************ 00:13:00.443 17:16:43 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:00.443 * Looking for test storage... 
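The nvme -> uio_pci_generic lines above come from scripts/setup.sh handing the emulated NVMe controllers over to a userspace-capable driver once the xnvme suite has finished (the 00:03.0 virtio device is left alone because it still has active mounts). A rough sketch of the sysfs rebind behind such a step, using one PCI address from the log; the real script does considerably more bookkeeping:

  dev=0000:00:10.0
  echo "$dev" > /sys/bus/pci/drivers/nvme/unbind                     # detach from the kernel nvme driver
  echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
  echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind            # attach to uio_pci_generic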
00:13:00.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:00.443 17:16:43 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:00.443 17:16:43 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:13:00.443 17:16:43 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:00.443 17:16:43 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:00.443 17:16:43 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.443 17:16:43 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.443 17:16:43 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.443 17:16:43 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.443 17:16:43 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.444 17:16:43 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.444 17:16:43 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.444 17:16:43 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.444 17:16:43 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.444 17:16:43 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.444 17:16:43 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.444 17:16:43 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:00.444 17:16:43 ublk -- scripts/common.sh@345 -- # : 1 00:13:00.444 17:16:43 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.444 17:16:43 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:00.444 17:16:43 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:00.444 17:16:43 ublk -- scripts/common.sh@353 -- # local d=1 00:13:00.444 17:16:43 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.444 17:16:43 ublk -- scripts/common.sh@355 -- # echo 1 00:13:00.444 17:16:43 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.444 17:16:43 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:00.444 17:16:43 ublk -- scripts/common.sh@353 -- # local d=2 00:13:00.444 17:16:43 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.444 17:16:43 ublk -- scripts/common.sh@355 -- # echo 2 00:13:00.444 17:16:43 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.444 17:16:43 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.444 17:16:43 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.444 17:16:43 ublk -- scripts/common.sh@368 -- # return 0 00:13:00.444 17:16:43 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.444 17:16:43 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:00.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.444 --rc genhtml_branch_coverage=1 00:13:00.444 --rc genhtml_function_coverage=1 00:13:00.444 --rc genhtml_legend=1 00:13:00.444 --rc geninfo_all_blocks=1 00:13:00.444 --rc geninfo_unexecuted_blocks=1 00:13:00.444 00:13:00.444 ' 00:13:00.444 17:16:43 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:00.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.444 --rc genhtml_branch_coverage=1 00:13:00.444 --rc genhtml_function_coverage=1 00:13:00.444 --rc genhtml_legend=1 00:13:00.444 --rc geninfo_all_blocks=1 00:13:00.444 --rc geninfo_unexecuted_blocks=1 00:13:00.444 00:13:00.444 ' 00:13:00.444 17:16:43 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:00.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.444 --rc genhtml_branch_coverage=1 00:13:00.444 --rc 
genhtml_function_coverage=1 00:13:00.444 --rc genhtml_legend=1 00:13:00.444 --rc geninfo_all_blocks=1 00:13:00.444 --rc geninfo_unexecuted_blocks=1 00:13:00.444 00:13:00.444 ' 00:13:00.444 17:16:43 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:00.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.444 --rc genhtml_branch_coverage=1 00:13:00.444 --rc genhtml_function_coverage=1 00:13:00.444 --rc genhtml_legend=1 00:13:00.444 --rc geninfo_all_blocks=1 00:13:00.444 --rc geninfo_unexecuted_blocks=1 00:13:00.444 00:13:00.444 ' 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:00.444 17:16:43 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:00.444 17:16:43 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:00.444 17:16:43 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:00.444 17:16:43 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:00.444 17:16:43 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:00.444 17:16:43 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:00.444 17:16:43 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:00.444 17:16:43 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:00.444 17:16:43 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:00.444 17:16:43 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:00.444 17:16:43 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:00.444 17:16:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:00.444 ************************************ 00:13:00.444 START TEST test_save_ublk_config 00:13:00.444 ************************************ 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:13:00.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70615 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70615 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70615 ']' 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
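test_save_ublk_config, which is about to start a dedicated spdk_tgt, depends on the kernel ublk driver loaded by the modprobe above; with ublk_drv in place the target can expose /dev/ublkb0 as referenced further down. The round trip the test performs can be sketched roughly as follows (the file name ublk_config.json is illustrative; the log's second target actually receives the JSON via -c /dev/fd/63):

  modprobe ublk_drv                                        # provides /dev/ublk-control for device creation
  build/bin/spdk_tgt -L ublk &                             # start a target with ublk debug logging
  # (wait for the RPC socket before issuing rpc.py calls)
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 32 4096     # 32 MiB bdev, 4096-byte blocks, as in the dump below
  scripts/rpc.py ublk_start_disk malloc0 0                 # exposes the bdev as /dev/ublkb0
  scripts/rpc.py save_config > ublk_config.json            # dump the live configuration, ublk section included
  build/bin/spdk_tgt -L ublk -c ublk_config.json           # replay the saved config into a fresh target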
00:13:00.444 17:16:43 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:00.444 17:16:43 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:00.705 [2024-10-30 17:16:43.466673] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:13:00.705 [2024-10-30 17:16:43.466779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70615 ] 00:13:00.705 [2024-10-30 17:16:43.623999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.965 [2024-10-30 17:16:43.745426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.537 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:01.538 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:13:01.538 17:16:44 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:01.538 17:16:44 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:01.538 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.538 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:01.538 [2024-10-30 17:16:44.469225] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:01.538 [2024-10-30 17:16:44.470155] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:01.799 malloc0 00:13:01.799 [2024-10-30 17:16:44.540363] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:01.799 [2024-10-30 17:16:44.540461] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:01.799 [2024-10-30 17:16:44.540472] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:01.799 [2024-10-30 17:16:44.540480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:01.799 [2024-10-30 17:16:44.550333] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:01.799 [2024-10-30 17:16:44.550368] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:01.799 [2024-10-30 17:16:44.557260] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:01.799 [2024-10-30 17:16:44.557390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:01.799 [2024-10-30 17:16:44.574230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:01.799 0 00:13:01.799 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.799 17:16:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:01.799 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.799 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:02.061 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.061 17:16:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:02.061 "subsystems": [ 00:13:02.061 { 00:13:02.061 "subsystem": 
"fsdev", 00:13:02.061 "config": [ 00:13:02.061 { 00:13:02.061 "method": "fsdev_set_opts", 00:13:02.061 "params": { 00:13:02.061 "fsdev_io_pool_size": 65535, 00:13:02.061 "fsdev_io_cache_size": 256 00:13:02.061 } 00:13:02.061 } 00:13:02.061 ] 00:13:02.061 }, 00:13:02.061 { 00:13:02.061 "subsystem": "keyring", 00:13:02.061 "config": [] 00:13:02.061 }, 00:13:02.061 { 00:13:02.061 "subsystem": "iobuf", 00:13:02.061 "config": [ 00:13:02.061 { 00:13:02.061 "method": "iobuf_set_options", 00:13:02.061 "params": { 00:13:02.061 "small_pool_count": 8192, 00:13:02.061 "large_pool_count": 1024, 00:13:02.061 "small_bufsize": 8192, 00:13:02.061 "large_bufsize": 135168, 00:13:02.061 "enable_numa": false 00:13:02.061 } 00:13:02.061 } 00:13:02.061 ] 00:13:02.061 }, 00:13:02.061 { 00:13:02.061 "subsystem": "sock", 00:13:02.061 "config": [ 00:13:02.061 { 00:13:02.061 "method": "sock_set_default_impl", 00:13:02.061 "params": { 00:13:02.061 "impl_name": "posix" 00:13:02.061 } 00:13:02.061 }, 00:13:02.061 { 00:13:02.061 "method": "sock_impl_set_options", 00:13:02.061 "params": { 00:13:02.061 "impl_name": "ssl", 00:13:02.061 "recv_buf_size": 4096, 00:13:02.061 "send_buf_size": 4096, 00:13:02.061 "enable_recv_pipe": true, 00:13:02.061 "enable_quickack": false, 00:13:02.061 "enable_placement_id": 0, 00:13:02.061 "enable_zerocopy_send_server": true, 00:13:02.061 "enable_zerocopy_send_client": false, 00:13:02.061 "zerocopy_threshold": 0, 00:13:02.061 "tls_version": 0, 00:13:02.061 "enable_ktls": false 00:13:02.061 } 00:13:02.061 }, 00:13:02.061 { 00:13:02.061 "method": "sock_impl_set_options", 00:13:02.061 "params": { 00:13:02.061 "impl_name": "posix", 00:13:02.061 "recv_buf_size": 2097152, 00:13:02.061 "send_buf_size": 2097152, 00:13:02.061 "enable_recv_pipe": true, 00:13:02.061 "enable_quickack": false, 00:13:02.061 "enable_placement_id": 0, 00:13:02.061 "enable_zerocopy_send_server": true, 00:13:02.061 "enable_zerocopy_send_client": false, 00:13:02.062 "zerocopy_threshold": 0, 00:13:02.062 "tls_version": 0, 00:13:02.062 "enable_ktls": false 00:13:02.062 } 00:13:02.062 } 00:13:02.062 ] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "vmd", 00:13:02.062 "config": [] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "accel", 00:13:02.062 "config": [ 00:13:02.062 { 00:13:02.062 "method": "accel_set_options", 00:13:02.062 "params": { 00:13:02.062 "small_cache_size": 128, 00:13:02.062 "large_cache_size": 16, 00:13:02.062 "task_count": 2048, 00:13:02.062 "sequence_count": 2048, 00:13:02.062 "buf_count": 2048 00:13:02.062 } 00:13:02.062 } 00:13:02.062 ] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "bdev", 00:13:02.062 "config": [ 00:13:02.062 { 00:13:02.062 "method": "bdev_set_options", 00:13:02.062 "params": { 00:13:02.062 "bdev_io_pool_size": 65535, 00:13:02.062 "bdev_io_cache_size": 256, 00:13:02.062 "bdev_auto_examine": true, 00:13:02.062 "iobuf_small_cache_size": 128, 00:13:02.062 "iobuf_large_cache_size": 16 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "bdev_raid_set_options", 00:13:02.062 "params": { 00:13:02.062 "process_window_size_kb": 1024, 00:13:02.062 "process_max_bandwidth_mb_sec": 0 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "bdev_iscsi_set_options", 00:13:02.062 "params": { 00:13:02.062 "timeout_sec": 30 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "bdev_nvme_set_options", 00:13:02.062 "params": { 00:13:02.062 "action_on_timeout": "none", 00:13:02.062 "timeout_us": 0, 00:13:02.062 "timeout_admin_us": 0, 
00:13:02.062 "keep_alive_timeout_ms": 10000, 00:13:02.062 "arbitration_burst": 0, 00:13:02.062 "low_priority_weight": 0, 00:13:02.062 "medium_priority_weight": 0, 00:13:02.062 "high_priority_weight": 0, 00:13:02.062 "nvme_adminq_poll_period_us": 10000, 00:13:02.062 "nvme_ioq_poll_period_us": 0, 00:13:02.062 "io_queue_requests": 0, 00:13:02.062 "delay_cmd_submit": true, 00:13:02.062 "transport_retry_count": 4, 00:13:02.062 "bdev_retry_count": 3, 00:13:02.062 "transport_ack_timeout": 0, 00:13:02.062 "ctrlr_loss_timeout_sec": 0, 00:13:02.062 "reconnect_delay_sec": 0, 00:13:02.062 "fast_io_fail_timeout_sec": 0, 00:13:02.062 "disable_auto_failback": false, 00:13:02.062 "generate_uuids": false, 00:13:02.062 "transport_tos": 0, 00:13:02.062 "nvme_error_stat": false, 00:13:02.062 "rdma_srq_size": 0, 00:13:02.062 "io_path_stat": false, 00:13:02.062 "allow_accel_sequence": false, 00:13:02.062 "rdma_max_cq_size": 0, 00:13:02.062 "rdma_cm_event_timeout_ms": 0, 00:13:02.062 "dhchap_digests": [ 00:13:02.062 "sha256", 00:13:02.062 "sha384", 00:13:02.062 "sha512" 00:13:02.062 ], 00:13:02.062 "dhchap_dhgroups": [ 00:13:02.062 "null", 00:13:02.062 "ffdhe2048", 00:13:02.062 "ffdhe3072", 00:13:02.062 "ffdhe4096", 00:13:02.062 "ffdhe6144", 00:13:02.062 "ffdhe8192" 00:13:02.062 ] 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "bdev_nvme_set_hotplug", 00:13:02.062 "params": { 00:13:02.062 "period_us": 100000, 00:13:02.062 "enable": false 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "bdev_malloc_create", 00:13:02.062 "params": { 00:13:02.062 "name": "malloc0", 00:13:02.062 "num_blocks": 8192, 00:13:02.062 "block_size": 4096, 00:13:02.062 "physical_block_size": 4096, 00:13:02.062 "uuid": "c9c02d7e-c826-4fd5-96fd-6f43275b5ad1", 00:13:02.062 "optimal_io_boundary": 0, 00:13:02.062 "md_size": 0, 00:13:02.062 "dif_type": 0, 00:13:02.062 "dif_is_head_of_md": false, 00:13:02.062 "dif_pi_format": 0 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "bdev_wait_for_examine" 00:13:02.062 } 00:13:02.062 ] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "scsi", 00:13:02.062 "config": null 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "scheduler", 00:13:02.062 "config": [ 00:13:02.062 { 00:13:02.062 "method": "framework_set_scheduler", 00:13:02.062 "params": { 00:13:02.062 "name": "static" 00:13:02.062 } 00:13:02.062 } 00:13:02.062 ] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "vhost_scsi", 00:13:02.062 "config": [] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "vhost_blk", 00:13:02.062 "config": [] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "ublk", 00:13:02.062 "config": [ 00:13:02.062 { 00:13:02.062 "method": "ublk_create_target", 00:13:02.062 "params": { 00:13:02.062 "cpumask": "1" 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "ublk_start_disk", 00:13:02.062 "params": { 00:13:02.062 "bdev_name": "malloc0", 00:13:02.062 "ublk_id": 0, 00:13:02.062 "num_queues": 1, 00:13:02.062 "queue_depth": 128 00:13:02.062 } 00:13:02.062 } 00:13:02.062 ] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "nbd", 00:13:02.062 "config": [] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "nvmf", 00:13:02.062 "config": [ 00:13:02.062 { 00:13:02.062 "method": "nvmf_set_config", 00:13:02.062 "params": { 00:13:02.062 "discovery_filter": "match_any", 00:13:02.062 "admin_cmd_passthru": { 00:13:02.062 "identify_ctrlr": false 00:13:02.062 }, 00:13:02.062 "dhchap_digests": [ 00:13:02.062 "sha256", 
00:13:02.062 "sha384", 00:13:02.062 "sha512" 00:13:02.062 ], 00:13:02.062 "dhchap_dhgroups": [ 00:13:02.062 "null", 00:13:02.062 "ffdhe2048", 00:13:02.062 "ffdhe3072", 00:13:02.062 "ffdhe4096", 00:13:02.062 "ffdhe6144", 00:13:02.062 "ffdhe8192" 00:13:02.062 ] 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "nvmf_set_max_subsystems", 00:13:02.062 "params": { 00:13:02.062 "max_subsystems": 1024 00:13:02.062 } 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "method": "nvmf_set_crdt", 00:13:02.062 "params": { 00:13:02.062 "crdt1": 0, 00:13:02.062 "crdt2": 0, 00:13:02.062 "crdt3": 0 00:13:02.062 } 00:13:02.062 } 00:13:02.062 ] 00:13:02.062 }, 00:13:02.062 { 00:13:02.062 "subsystem": "iscsi", 00:13:02.062 "config": [ 00:13:02.062 { 00:13:02.062 "method": "iscsi_set_options", 00:13:02.062 "params": { 00:13:02.062 "node_base": "iqn.2016-06.io.spdk", 00:13:02.062 "max_sessions": 128, 00:13:02.062 "max_connections_per_session": 2, 00:13:02.062 "max_queue_depth": 64, 00:13:02.062 "default_time2wait": 2, 00:13:02.062 "default_time2retain": 20, 00:13:02.062 "first_burst_length": 8192, 00:13:02.062 "immediate_data": true, 00:13:02.062 "allow_duplicated_isid": false, 00:13:02.062 "error_recovery_level": 0, 00:13:02.062 "nop_timeout": 60, 00:13:02.062 "nop_in_interval": 30, 00:13:02.062 "disable_chap": false, 00:13:02.062 "require_chap": false, 00:13:02.062 "mutual_chap": false, 00:13:02.062 "chap_group": 0, 00:13:02.062 "max_large_datain_per_connection": 64, 00:13:02.062 "max_r2t_per_connection": 4, 00:13:02.062 "pdu_pool_size": 36864, 00:13:02.062 "immediate_data_pool_size": 16384, 00:13:02.062 "data_out_pool_size": 2048 00:13:02.062 } 00:13:02.062 } 00:13:02.062 ] 00:13:02.062 } 00:13:02.062 ] 00:13:02.062 }' 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70615 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70615 ']' 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70615 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70615 00:13:02.062 killing process with pid 70615 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70615' 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70615 00:13:02.062 17:16:44 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70615 00:13:03.049 [2024-10-30 17:16:45.878997] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:03.049 [2024-10-30 17:16:45.924230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:03.049 [2024-10-30 17:16:45.924337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:03.049 [2024-10-30 17:16:45.933215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:03.049 [2024-10-30 17:16:45.933268] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:13:03.049 [2024-10-30 17:16:45.933278] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:03.049 [2024-10-30 17:16:45.933296] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:03.049 [2024-10-30 17:16:45.933402] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:04.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70670 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70670 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70670 ']' 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:04.457 17:16:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:04.458 17:16:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:04.458 "subsystems": [ 00:13:04.458 { 00:13:04.458 "subsystem": "fsdev", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "fsdev_set_opts", 00:13:04.458 "params": { 00:13:04.458 "fsdev_io_pool_size": 65535, 00:13:04.458 "fsdev_io_cache_size": 256 00:13:04.458 } 00:13:04.458 } 00:13:04.458 ] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "keyring", 00:13:04.458 "config": [] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "iobuf", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "iobuf_set_options", 00:13:04.458 "params": { 00:13:04.458 "small_pool_count": 8192, 00:13:04.458 "large_pool_count": 1024, 00:13:04.458 "small_bufsize": 8192, 00:13:04.458 "large_bufsize": 135168, 00:13:04.458 "enable_numa": false 00:13:04.458 } 00:13:04.458 } 00:13:04.458 ] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "sock", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "sock_set_default_impl", 00:13:04.458 "params": { 00:13:04.458 "impl_name": "posix" 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "sock_impl_set_options", 00:13:04.458 "params": { 00:13:04.458 "impl_name": "ssl", 00:13:04.458 "recv_buf_size": 4096, 00:13:04.458 "send_buf_size": 4096, 00:13:04.458 "enable_recv_pipe": true, 00:13:04.458 "enable_quickack": false, 00:13:04.458 "enable_placement_id": 0, 00:13:04.458 "enable_zerocopy_send_server": true, 00:13:04.458 "enable_zerocopy_send_client": false, 00:13:04.458 "zerocopy_threshold": 0, 00:13:04.458 "tls_version": 0, 00:13:04.458 "enable_ktls": false 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "sock_impl_set_options", 00:13:04.458 "params": { 00:13:04.458 "impl_name": "posix", 00:13:04.458 "recv_buf_size": 2097152, 00:13:04.458 "send_buf_size": 2097152, 00:13:04.458 "enable_recv_pipe": true, 00:13:04.458 "enable_quickack": false, 00:13:04.458 "enable_placement_id": 0, 00:13:04.458 "enable_zerocopy_send_server": true, 00:13:04.458 "enable_zerocopy_send_client": false, 00:13:04.458 "zerocopy_threshold": 0, 
00:13:04.458 "tls_version": 0, 00:13:04.458 "enable_ktls": false 00:13:04.458 } 00:13:04.458 } 00:13:04.458 ] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "vmd", 00:13:04.458 "config": [] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "accel", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "accel_set_options", 00:13:04.458 "params": { 00:13:04.458 "small_cache_size": 128, 00:13:04.458 "large_cache_size": 16, 00:13:04.458 "task_count": 2048, 00:13:04.458 "sequence_count": 2048, 00:13:04.458 "buf_count": 2048 00:13:04.458 } 00:13:04.458 } 00:13:04.458 ] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "bdev", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "bdev_set_options", 00:13:04.458 "params": { 00:13:04.458 "bdev_io_pool_size": 65535, 00:13:04.458 "bdev_io_cache_size": 256, 00:13:04.458 "bdev_auto_examine": true, 00:13:04.458 "iobuf_small_cache_size": 128, 00:13:04.458 "iobuf_large_cache_size": 16 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "bdev_raid_set_options", 00:13:04.458 "params": { 00:13:04.458 "process_window_size_kb": 1024, 00:13:04.458 "process_max_bandwidth_mb_sec": 0 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "bdev_iscsi_set_options", 00:13:04.458 "params": { 00:13:04.458 "timeout_sec": 30 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "bdev_nvme_set_options", 00:13:04.458 "params": { 00:13:04.458 "action_on_timeout": "none", 00:13:04.458 "timeout_us": 0, 00:13:04.458 "timeout_admin_us": 0, 00:13:04.458 "keep_alive_timeout_ms": 10000, 00:13:04.458 "arbitration_burst": 0, 00:13:04.458 "low_priority_weight": 0, 00:13:04.458 "medium_priority_weight": 0, 00:13:04.458 "high_priority_weight": 0, 00:13:04.458 "nvme_adminq_poll_period_us": 10000, 00:13:04.458 "nvme_ioq_poll_period_us": 0, 00:13:04.458 "io_queue_requests": 0, 00:13:04.458 "delay_cmd_submit": true, 00:13:04.458 "transport_retry_count": 4, 00:13:04.458 "bdev_retry_count": 3, 00:13:04.458 "transport_ack_timeout": 0, 00:13:04.458 "ctrlr_loss_timeout_sec": 0, 00:13:04.458 "reconnect_delay_sec": 0, 00:13:04.458 "fast_io_fail_timeout_sec": 0, 00:13:04.458 "disable_auto_failback": false, 00:13:04.458 "generate_uuids": false, 00:13:04.458 "transport_tos": 0, 00:13:04.458 "nvme_error_stat": false, 00:13:04.458 "rdma_srq_size": 0, 00:13:04.458 "io_path_stat": false, 00:13:04.458 "allow_accel_sequence": false, 00:13:04.458 "rdma_max_cq_size": 0, 00:13:04.458 "rdma_cm_event_timeout_ms": 0, 00:13:04.458 "dhchap_digests": [ 00:13:04.458 "sha256", 00:13:04.458 "sha384", 00:13:04.458 "sha512" 00:13:04.458 ], 00:13:04.458 "dhchap_dhgroups": [ 00:13:04.458 "null", 00:13:04.458 "ffdhe2048", 00:13:04.458 "ffdhe3072", 00:13:04.458 "ffdhe4096", 00:13:04.458 "ffdhe6144", 00:13:04.458 "ffdhe8192" 00:13:04.458 ] 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "bdev_nvme_set_hotplug", 00:13:04.458 "params": { 00:13:04.458 "period_us": 100000, 00:13:04.458 "enable": false 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "bdev_malloc_create", 00:13:04.458 "params": { 00:13:04.458 "name": "malloc0", 00:13:04.458 "num_blocks": 8192, 00:13:04.458 "block_size": 4096, 00:13:04.458 "physical_block_size": 4096, 00:13:04.458 "uuid": "c9c02d7e-c826-4fd5-96fd-6f43275b5ad1", 00:13:04.458 "optimal_io_boundary": 0, 00:13:04.458 "md_size": 0, 00:13:04.458 "dif_type": 0, 00:13:04.458 "dif_is_head_of_md": false, 00:13:04.458 "dif_pi_format": 0 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 
{ 00:13:04.458 "method": "bdev_wait_for_examine" 00:13:04.458 } 00:13:04.458 ] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "scsi", 00:13:04.458 "config": null 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "scheduler", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "framework_set_scheduler", 00:13:04.458 "params": { 00:13:04.458 "name": "static" 00:13:04.458 } 00:13:04.458 } 00:13:04.458 ] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "vhost_scsi", 00:13:04.458 "config": [] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "vhost_blk", 00:13:04.458 "config": [] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "ublk", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "ublk_create_target", 00:13:04.458 "params": { 00:13:04.458 "cpumask": "1" 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "ublk_start_disk", 00:13:04.458 "params": { 00:13:04.458 "bdev_name": "malloc0", 00:13:04.458 "ublk_id": 0, 00:13:04.458 "num_queues": 1, 00:13:04.458 "queue_depth": 128 00:13:04.458 } 00:13:04.458 } 00:13:04.458 ] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "nbd", 00:13:04.458 "config": [] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "nvmf", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "nvmf_set_config", 00:13:04.458 "params": { 00:13:04.458 "discovery_filter": "match_any", 00:13:04.458 "admin_cmd_passthru": { 00:13:04.458 "identify_ctrlr": false 00:13:04.458 }, 00:13:04.458 "dhchap_digests": [ 00:13:04.458 "sha256", 00:13:04.458 "sha384", 00:13:04.458 "sha512" 00:13:04.458 ], 00:13:04.458 "dhchap_dhgroups": [ 00:13:04.458 "null", 00:13:04.458 "ffdhe2048", 00:13:04.458 "ffdhe3072", 00:13:04.458 "ffdhe4096", 00:13:04.458 "ffdhe6144", 00:13:04.458 "ffdhe8192" 00:13:04.458 ] 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "nvmf_set_max_subsystems", 00:13:04.458 "params": { 00:13:04.458 "max_subsystems": 1024 00:13:04.458 } 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "method": "nvmf_set_crdt", 00:13:04.458 "params": { 00:13:04.458 "crdt1": 0, 00:13:04.458 "crdt2": 0, 00:13:04.458 "crdt3": 0 00:13:04.458 } 00:13:04.458 } 00:13:04.458 ] 00:13:04.458 }, 00:13:04.458 { 00:13:04.458 "subsystem": "iscsi", 00:13:04.458 "config": [ 00:13:04.458 { 00:13:04.458 "method": "iscsi_set_options", 00:13:04.458 "params": { 00:13:04.458 "node_base": "iqn.2016-06.io.spdk", 00:13:04.459 "max_sessions": 128, 00:13:04.459 "max_connections_per_session": 2, 00:13:04.459 "max_queue_depth": 64, 00:13:04.459 "default_time2wait": 2, 00:13:04.459 "default_time2retain": 20, 00:13:04.459 "first_burst_length": 8192, 00:13:04.459 "immediate_data": true, 00:13:04.459 "allow_duplicated_isid": false, 00:13:04.459 "error_recovery_level": 0, 00:13:04.459 "nop_timeout": 60, 00:13:04.459 "nop_in_interval": 30, 00:13:04.459 "disable_chap": false, 00:13:04.459 "require_chap": false, 00:13:04.459 "mutual_chap": false, 00:13:04.459 "chap_group": 0, 00:13:04.459 "max_large_datain_per_connection": 64, 00:13:04.459 "max_r2t_per_connection": 4, 00:13:04.459 "pdu_pool_size": 36864, 00:13:04.459 "immediate_data_pool_size": 16384, 00:13:04.459 "data_out_pool_size": 2048 00:13:04.459 } 00:13:04.459 } 00:13:04.459 ] 00:13:04.459 } 00:13:04.459 ] 00:13:04.459 }' 00:13:04.459 [2024-10-30 17:16:47.169646] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
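The JSON echoed above is the configuration captured from the first target (pid 70615) and fed back into a fresh spdk_tgt through '-c /dev/fd/63' (ublk.sh@118 in the trace). The part this test cares about is the "ublk" subsystem entry, which replays ublk_create_target with cpumask "1" and ublk_start_disk for bdev malloc0. A minimal sketch of the same round trip done by hand, assuming the dump came from the save_config RPC (the save step is not visible in this stretch of the log) and using a hypothetical plain file instead of a process-substitution fd:

    ./scripts/rpc.py save_config > /tmp/ublk_config.json        # capture the running config (assumed RPC, hypothetical path)
    ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &     # restart the target from the saved config
    ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'  # should print /dev/ublkb0 again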
00:13:04.459 [2024-10-30 17:16:47.169934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70670 ] 00:13:04.459 [2024-10-30 17:16:47.324975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.459 [2024-10-30 17:16:47.412565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.401 [2024-10-30 17:16:48.047215] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:05.401 [2024-10-30 17:16:48.047845] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:05.401 [2024-10-30 17:16:48.055304] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:05.401 [2024-10-30 17:16:48.055365] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:05.401 [2024-10-30 17:16:48.055372] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:05.401 [2024-10-30 17:16:48.055377] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:05.401 [2024-10-30 17:16:48.064265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:05.401 [2024-10-30 17:16:48.064280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:05.401 [2024-10-30 17:16:48.071219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:05.401 [2024-10-30 17:16:48.071290] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:05.401 [2024-10-30 17:16:48.088218] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70670 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70670 ']' 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70670 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70670 00:13:05.401 killing process with pid 70670 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:05.401 
17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70670' 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70670 00:13:05.401 17:16:48 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70670 00:13:06.344 [2024-10-30 17:16:49.173064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:06.344 [2024-10-30 17:16:49.207232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:06.344 [2024-10-30 17:16:49.207345] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:06.344 [2024-10-30 17:16:49.215224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:06.344 [2024-10-30 17:16:49.215262] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:06.344 [2024-10-30 17:16:49.215268] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:06.344 [2024-10-30 17:16:49.215287] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:06.344 [2024-10-30 17:16:49.215397] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:07.728 17:16:50 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:07.728 00:13:07.728 real 0m6.988s 00:13:07.728 user 0m4.810s 00:13:07.728 sys 0m2.818s 00:13:07.728 ************************************ 00:13:07.728 END TEST test_save_ublk_config 00:13:07.728 ************************************ 00:13:07.728 17:16:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:07.728 17:16:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:07.728 17:16:50 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70737 00:13:07.728 17:16:50 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:07.728 17:16:50 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70737 00:13:07.728 17:16:50 ublk -- common/autotest_common.sh@833 -- # '[' -z 70737 ']' 00:13:07.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.728 17:16:50 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.728 17:16:50 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:07.728 17:16:50 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:07.728 17:16:50 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.728 17:16:50 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:07.728 17:16:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:07.728 [2024-10-30 17:16:50.508533] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
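At this point test_save_ublk_config has passed in roughly 7 seconds of wall time: the device recreated purely from the saved configuration showed up as /dev/ublkb0 and the block node existed. The "Starting SPDK" banner above and the EAL parameters below belong to the next target (pid 70737, started with -m 0x3, i.e. two cores) that hosts the create tests. The check the test performed is essentially, as a sketch mirroring ublk.sh@122-123 in the trace:

    ublk_dev=$(rpc_cmd ublk_get_disks | jq -r '.[0].ublk_device')
    [[ "$ublk_dev" == /dev/ublkb0 ]] && [[ -b /dev/ublkb0 ]]    # config restore produced a usable device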
00:13:07.728 [2024-10-30 17:16:50.508652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70737 ] 00:13:07.728 [2024-10-30 17:16:50.669744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:07.989 [2024-10-30 17:16:50.793864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.989 [2024-10-30 17:16:50.793961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.563 17:16:51 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:08.563 17:16:51 ublk -- common/autotest_common.sh@866 -- # return 0 00:13:08.563 17:16:51 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:08.563 17:16:51 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:08.563 17:16:51 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:08.563 17:16:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 ************************************ 00:13:08.563 START TEST test_create_ublk 00:13:08.563 ************************************ 00:13:08.563 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:13:08.563 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:08.563 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.563 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 [2024-10-30 17:16:51.499227] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:08.563 [2024-10-30 17:16:51.501478] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:08.563 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.563 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:08.563 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:08.563 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.563 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:08.825 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.825 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:08.825 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:08.825 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.825 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:08.825 [2024-10-30 17:16:51.715396] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:08.826 [2024-10-30 17:16:51.715835] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:08.826 [2024-10-30 17:16:51.715854] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:08.826 [2024-10-30 17:16:51.715862] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:08.826 [2024-10-30 17:16:51.724541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:08.826 [2024-10-30 17:16:51.724571] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:08.826 
[2024-10-30 17:16:51.731240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:08.826 [2024-10-30 17:16:51.743297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:08.826 [2024-10-30 17:16:51.766242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:08.826 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.826 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:08.826 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:08.826 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:08.826 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.826 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:08.826 17:16:51 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.826 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:08.826 { 00:13:08.826 "ublk_device": "/dev/ublkb0", 00:13:08.826 "id": 0, 00:13:08.826 "queue_depth": 512, 00:13:08.826 "num_queues": 4, 00:13:08.826 "bdev_name": "Malloc0" 00:13:08.826 } 00:13:08.826 ]' 00:13:08.826 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:09.087 17:16:51 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
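test_create_ublk has now created a 128 MiB malloc bdev with 4096-byte blocks, exported it as ublk device 0 with 4 queues of depth 512, and verified the ublk_get_disks fields; the fio command line assembled above runs next, writing a 0xcc pattern to /dev/ublkb0 for 10 seconds with verification enabled. The RPC sequence exercised by the trace, collected in one place as a sketch:

    rpc_cmd bdev_malloc_create -b Malloc0 128 4096          # 128 MiB bdev, 4096-byte blocks
    rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512           # expose it as /dev/ublkb0
    rpc_cmd ublk_get_disks -n 0 | jq -r '.[0].queue_depth'  # expect 512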
00:13:09.087 17:16:51 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:09.087 fio: verification read phase will never start because write phase uses all of runtime 00:13:09.087 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:09.087 fio-3.35 00:13:09.087 Starting 1 process 00:13:19.243 00:13:19.243 fio_test: (groupid=0, jobs=1): err= 0: pid=70787: Wed Oct 30 17:17:02 2024 00:13:19.243 write: IOPS=18.9k, BW=73.8MiB/s (77.3MB/s)(738MiB/10001msec); 0 zone resets 00:13:19.243 clat (usec): min=34, max=7924, avg=52.20, stdev=115.34 00:13:19.243 lat (usec): min=35, max=7925, avg=52.64, stdev=115.36 00:13:19.243 clat percentiles (usec): 00:13:19.243 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:13:19.243 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 49], 00:13:19.243 | 70.00th=[ 50], 80.00th=[ 51], 90.00th=[ 55], 95.00th=[ 59], 00:13:19.243 | 99.00th=[ 72], 99.50th=[ 83], 99.90th=[ 2245], 99.95th=[ 3294], 00:13:19.243 | 99.99th=[ 4015] 00:13:19.243 bw ( KiB/s): min=25856, max=83496, per=99.76%, avg=75344.42, stdev=12368.38, samples=19 00:13:19.243 iops : min= 6464, max=20874, avg=18836.11, stdev=3092.10, samples=19 00:13:19.243 lat (usec) : 50=74.24%, 100=25.40%, 250=0.15%, 500=0.04%, 750=0.01% 00:13:19.243 lat (usec) : 1000=0.01% 00:13:19.243 lat (msec) : 2=0.04%, 4=0.10%, 10=0.01% 00:13:19.243 cpu : usr=2.98%, sys=13.16%, ctx=188838, majf=0, minf=797 00:13:19.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:19.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.243 issued rwts: total=0,188838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:19.243 00:13:19.243 Run status group 0 (all jobs): 00:13:19.243 WRITE: bw=73.8MiB/s (77.3MB/s), 73.8MiB/s-73.8MiB/s (77.3MB/s-77.3MB/s), io=738MiB (773MB), run=10001-10001msec 00:13:19.243 00:13:19.243 Disk stats (read/write): 00:13:19.243 ublkb0: ios=0/186731, merge=0/0, ticks=0/8380, in_queue=8380, util=99.09% 00:13:19.243 17:17:02 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:19.243 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.243 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.243 [2024-10-30 17:17:02.195829] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:19.505 [2024-10-30 17:17:02.235666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:19.505 [2024-10-30 17:17:02.236535] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:19.505 [2024-10-30 17:17:02.243232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:19.505 [2024-10-30 17:17:02.243454] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:19.505 [2024-10-30 17:17:02.243467] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.505 17:17:02 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.505 [2024-10-30 17:17:02.259268] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:19.505 request: 00:13:19.505 { 00:13:19.505 "ublk_id": 0, 00:13:19.505 "method": "ublk_stop_disk", 00:13:19.505 "req_id": 1 00:13:19.505 } 00:13:19.505 Got JSON-RPC error response 00:13:19.505 response: 00:13:19.505 { 00:13:19.505 "code": -19, 00:13:19.505 "message": "No such device" 00:13:19.505 } 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:19.505 17:17:02 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.505 [2024-10-30 17:17:02.275282] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:19.505 [2024-10-30 17:17:02.278933] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:19.505 [2024-10-30 17:17:02.278967] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:19.505 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.505 17:17:02 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:19.506 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.506 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.768 17:17:02 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:19.768 17:17:02 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.768 17:17:02 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:19.768 17:17:02 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:19.768 17:17:02 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:19.768 17:17:02 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.768 17:17:02 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:19.768 17:17:02 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:19.768 ************************************ 00:13:19.768 END TEST test_create_ublk 00:13:19.768 ************************************ 00:13:19.768 17:17:02 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:19.768 00:13:19.768 real 0m11.246s 00:13:19.768 user 0m0.609s 00:13:19.768 sys 0m1.394s 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.768 17:17:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.029 17:17:02 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:20.029 17:17:02 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:20.029 17:17:02 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:20.029 17:17:02 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.029 ************************************ 00:13:20.029 START TEST test_create_multi_ublk 00:13:20.029 ************************************ 00:13:20.029 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:13:20.029 17:17:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:20.029 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.029 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.029 [2024-10-30 17:17:02.783213] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:20.029 [2024-10-30 17:17:02.784674] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:20.029 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.029 17:17:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:20.029 17:17:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:20.029 17:17:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.030 17:17:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:20.030 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.030 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.030 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.030 17:17:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:20.030 17:17:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:20.030 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.030 17:17:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.030 [2024-10-30 17:17:02.995313] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:13:20.030 [2024-10-30 17:17:02.995619] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:20.030 [2024-10-30 17:17:02.995630] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:20.030 [2024-10-30 17:17:02.995638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.030 [2024-10-30 17:17:03.007252] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.030 [2024-10-30 17:17:03.007273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:20.291 [2024-10-30 17:17:03.019217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.291 [2024-10-30 17:17:03.019699] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:20.291 [2024-10-30 17:17:03.048219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.291 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.291 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:20.291 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.291 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:20.291 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.291 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.553 [2024-10-30 17:17:03.288307] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:20.553 [2024-10-30 17:17:03.288598] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:20.553 [2024-10-30 17:17:03.288611] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:20.553 [2024-10-30 17:17:03.288616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.553 [2024-10-30 17:17:03.300234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.553 [2024-10-30 17:17:03.300251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:20.553 [2024-10-30 17:17:03.312220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.553 [2024-10-30 17:17:03.312702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:20.553 [2024-10-30 17:17:03.348220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.553 
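test_create_multi_ublk repeats the same create sequence for four devices: the loop over seq 0 $MAX_DEV_ID (0..3 here) builds Malloc0..Malloc3 and exports each as ublk id 0..3 with the same -q 4 -d 512 geometry. Malloc0 and Malloc1 are done above; Malloc2 and Malloc3 follow below. Condensed as a hedged sketch of the loop body, with names and sizes as in the trace:

    for i in $(seq 0 3); do
        rpc_cmd bdev_malloc_create -b "Malloc$i" 128 4096      # one 128 MiB backing bdev per device
        rpc_cmd ublk_start_disk "Malloc$i" "$i" -q 4 -d 512    # /dev/ublkb0 .. /dev/ublkb3
    done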
17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.553 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:20.815 [2024-10-30 17:17:03.624355] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:20.815 [2024-10-30 17:17:03.624760] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:20.815 [2024-10-30 17:17:03.624775] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:20.815 [2024-10-30 17:17:03.624784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.815 [2024-10-30 17:17:03.636256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.815 [2024-10-30 17:17:03.636281] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:20.815 [2024-10-30 17:17:03.646225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.815 [2024-10-30 17:17:03.646898] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:20.815 [2024-10-30 17:17:03.663218] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.815 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 [2024-10-30 17:17:03.870364] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:21.077 [2024-10-30 17:17:03.870756] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:21.077 [2024-10-30 17:17:03.870772] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:21.077 [2024-10-30 17:17:03.870780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:21.077 
[2024-10-30 17:17:03.878257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:21.077 [2024-10-30 17:17:03.878277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:21.077 [2024-10-30 17:17:03.886232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:21.077 [2024-10-30 17:17:03.886881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:21.077 [2024-10-30 17:17:03.895281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:21.077 { 00:13:21.077 "ublk_device": "/dev/ublkb0", 00:13:21.077 "id": 0, 00:13:21.077 "queue_depth": 512, 00:13:21.077 "num_queues": 4, 00:13:21.077 "bdev_name": "Malloc0" 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "ublk_device": "/dev/ublkb1", 00:13:21.077 "id": 1, 00:13:21.077 "queue_depth": 512, 00:13:21.077 "num_queues": 4, 00:13:21.077 "bdev_name": "Malloc1" 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "ublk_device": "/dev/ublkb2", 00:13:21.077 "id": 2, 00:13:21.077 "queue_depth": 512, 00:13:21.077 "num_queues": 4, 00:13:21.077 "bdev_name": "Malloc2" 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "ublk_device": "/dev/ublkb3", 00:13:21.077 "id": 3, 00:13:21.077 "queue_depth": 512, 00:13:21.077 "num_queues": 4, 00:13:21.077 "bdev_name": "Malloc3" 00:13:21.077 } 00:13:21.077 ]' 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:21.077 17:17:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:21.077 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:21.077 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:21.077 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:21.077 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:21.339 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:21.600 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:21.601 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:21.601 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:21.601 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:21.601 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.601 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:21.601 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.601 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:21.601 [2024-10-30 17:17:04.566362] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.863 [2024-10-30 17:17:04.598290] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.863 [2024-10-30 17:17:04.599318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.863 [2024-10-30 17:17:04.606256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.863 [2024-10-30 17:17:04.606543] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:21.863 [2024-10-30 17:17:04.606559] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 [2024-10-30 17:17:04.622310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.863 [2024-10-30 17:17:04.669267] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.863 [2024-10-30 17:17:04.670279] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.863 [2024-10-30 17:17:04.679265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.863 [2024-10-30 17:17:04.679546] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:21.863 [2024-10-30 17:17:04.679561] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:21.863 [2024-10-30 17:17:04.694317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.863 [2024-10-30 17:17:04.739272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.863 [2024-10-30 17:17:04.740179] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.863 [2024-10-30 17:17:04.751277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.863 [2024-10-30 17:17:04.751556] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:21.863 [2024-10-30 17:17:04.751567] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.863 [2024-10-30 17:17:04.766309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.863 [2024-10-30 17:17:04.810232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.863 [2024-10-30 17:17:04.811070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.863 [2024-10-30 17:17:04.818231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.863 [2024-10-30 17:17:04.818496] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:21.863 [2024-10-30 17:17:04.818511] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.863 17:17:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:22.125 [2024-10-30 17:17:05.010286] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:22.125 [2024-10-30 17:17:05.018218] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:22.125 [2024-10-30 17:17:05.018250] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:22.125 17:17:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:22.125 17:17:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:22.125 17:17:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:22.125 17:17:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.125 17:17:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:22.699 17:17:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.699 17:17:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:22.699 17:17:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:22.699 17:17:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.699 17:17:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:22.958 17:17:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.958 17:17:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:22.958 17:17:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:22.958 17:17:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.958 17:17:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:23.218 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.218 17:17:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:23.218 17:17:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:23.218 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.218 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:23.480 17:17:06 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:23.480 17:17:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:23.742 ************************************ 00:13:23.742 END TEST test_create_multi_ublk 00:13:23.742 ************************************ 00:13:23.742 17:17:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:23.742 00:13:23.742 real 0m3.691s 00:13:23.742 user 0m0.815s 00:13:23.742 sys 0m0.143s 00:13:23.742 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.742 17:17:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:23.742 17:17:06 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:23.742 17:17:06 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:23.742 17:17:06 ublk -- ublk/ublk.sh@130 -- # killprocess 70737 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@952 -- # '[' -z 70737 ']' 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@956 -- # kill -0 70737 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@957 -- # uname 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70737 00:13:23.742 killing process with pid 70737 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70737' 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@971 -- # kill 70737 00:13:23.742 17:17:06 ublk -- common/autotest_common.sh@976 -- # wait 70737 00:13:24.313 [2024-10-30 17:17:07.236443] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:24.313 [2024-10-30 17:17:07.236511] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:25.256 00:13:25.256 real 0m24.863s 00:13:25.256 user 0m36.331s 00:13:25.256 sys 0m9.399s 00:13:25.256 17:17:08 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:25.257 ************************************ 00:13:25.257 END TEST ublk 00:13:25.257 ************************************ 00:13:25.257 17:17:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.257 17:17:08 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:25.257 
17:17:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:25.257 17:17:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:25.257 17:17:08 -- common/autotest_common.sh@10 -- # set +x 00:13:25.257 ************************************ 00:13:25.257 START TEST ublk_recovery 00:13:25.257 ************************************ 00:13:25.257 17:17:08 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:25.257 * Looking for test storage... 00:13:25.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:25.257 17:17:08 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:25.257 17:17:08 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:25.257 17:17:08 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.517 17:17:08 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:25.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.517 --rc genhtml_branch_coverage=1 00:13:25.517 --rc genhtml_function_coverage=1 00:13:25.517 --rc genhtml_legend=1 00:13:25.517 --rc geninfo_all_blocks=1 00:13:25.517 --rc geninfo_unexecuted_blocks=1 00:13:25.517 00:13:25.517 ' 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:25.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.517 --rc genhtml_branch_coverage=1 00:13:25.517 --rc genhtml_function_coverage=1 00:13:25.517 --rc genhtml_legend=1 00:13:25.517 --rc geninfo_all_blocks=1 00:13:25.517 --rc geninfo_unexecuted_blocks=1 00:13:25.517 00:13:25.517 ' 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:25.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.517 --rc genhtml_branch_coverage=1 00:13:25.517 --rc genhtml_function_coverage=1 00:13:25.517 --rc genhtml_legend=1 00:13:25.517 --rc geninfo_all_blocks=1 00:13:25.517 --rc geninfo_unexecuted_blocks=1 00:13:25.517 00:13:25.517 ' 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:25.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.517 --rc genhtml_branch_coverage=1 00:13:25.517 --rc genhtml_function_coverage=1 00:13:25.517 --rc genhtml_legend=1 00:13:25.517 --rc geninfo_all_blocks=1 00:13:25.517 --rc geninfo_unexecuted_blocks=1 00:13:25.517 00:13:25.517 ' 00:13:25.517 17:17:08 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:25.517 17:17:08 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:25.517 17:17:08 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:25.517 17:17:08 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:25.517 17:17:08 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:25.517 17:17:08 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:25.517 17:17:08 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:25.517 17:17:08 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:25.517 17:17:08 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:25.517 17:17:08 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:25.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.517 17:17:08 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71140 00:13:25.517 17:17:08 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:25.517 17:17:08 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71140 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71140 ']' 00:13:25.517 17:17:08 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:25.517 17:17:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.517 [2024-10-30 17:17:08.386032] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:13:25.517 [2024-10-30 17:17:08.386179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71140 ] 00:13:25.778 [2024-10-30 17:17:08.553015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.778 [2024-10-30 17:17:08.685870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.778 [2024-10-30 17:17:08.685979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.352 17:17:09 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:26.352 17:17:09 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:13:26.352 17:17:09 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:26.352 17:17:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.352 17:17:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.352 [2024-10-30 17:17:09.282221] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:26.352 [2024-10-30 17:17:09.284041] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:26.352 17:17:09 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.352 17:17:09 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:26.352 17:17:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.352 17:17:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.614 malloc0 00:13:26.614 17:17:09 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.614 17:17:09 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:26.614 17:17:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.614 17:17:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.614 [2024-10-30 17:17:09.386352] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:13:26.614 [2024-10-30 17:17:09.386444] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:26.614 [2024-10-30 17:17:09.386455] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:26.614 [2024-10-30 17:17:09.386463] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:26.614 [2024-10-30 17:17:09.395303] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:26.614 [2024-10-30 17:17:09.395323] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:26.614 [2024-10-30 17:17:09.402228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:26.614 [2024-10-30 17:17:09.402366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:26.614 [2024-10-30 17:17:09.406901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:26.614 1 00:13:26.614 17:17:09 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.614 17:17:09 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:27.562 17:17:10 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71186 00:13:27.562 17:17:10 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:27.562 17:17:10 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:27.562 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.562 fio-3.35 00:13:27.562 Starting 1 process 00:13:32.853 17:17:15 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71140 00:13:32.853 17:17:15 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:38.143 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71140 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:38.143 17:17:20 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71291 00:13:38.143 17:17:20 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:38.143 17:17:20 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:38.143 17:17:20 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71291 00:13:38.143 17:17:20 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71291 ']' 00:13:38.143 17:17:20 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.143 17:17:20 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.143 17:17:20 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.143 17:17:20 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.143 17:17:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.143 [2024-10-30 17:17:20.511953] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:13:38.143 [2024-10-30 17:17:20.512069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71291 ] 00:13:38.143 [2024-10-30 17:17:20.670403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.143 [2024-10-30 17:17:20.774765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.143 [2024-10-30 17:17:20.774842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:13:38.716 17:17:21 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.716 [2024-10-30 17:17:21.466226] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:38.716 [2024-10-30 17:17:21.468533] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.716 17:17:21 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.716 malloc0 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.716 17:17:21 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.716 [2024-10-30 17:17:21.586388] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:38.716 [2024-10-30 17:17:21.586448] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:38.716 [2024-10-30 17:17:21.586460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:38.716 [2024-10-30 17:17:21.593220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:38.716 [2024-10-30 17:17:21.593254] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:38.716 1 00:13:38.716 17:17:21 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.716 17:17:21 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71186 00:13:39.660 [2024-10-30 17:17:22.593294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:39.660 [2024-10-30 17:17:22.597238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:39.660 [2024-10-30 17:17:22.597255] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:41.044 [2024-10-30 17:17:23.597280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:41.044 [2024-10-30 17:17:23.605219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:41.044 [2024-10-30 17:17:23.605243] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:13:41.987 [2024-10-30 17:17:24.605265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:41.987 [2024-10-30 17:17:24.608227] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:41.987 [2024-10-30 17:17:24.608239] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:41.987 [2024-10-30 17:17:24.608247] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:41.987 [2024-10-30 17:17:24.608314] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:03.968 [2024-10-30 17:17:45.709227] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:03.968 [2024-10-30 17:17:45.715523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:03.968 [2024-10-30 17:17:45.723398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:03.968 [2024-10-30 17:17:45.723417] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:30.530 00:14:30.530 fio_test: (groupid=0, jobs=1): err= 0: pid=71189: Wed Oct 30 17:18:10 2024 00:14:30.530 read: IOPS=15.6k, BW=61.0MiB/s (64.0MB/s)(3660MiB/60001msec) 00:14:30.530 slat (nsec): min=931, max=783655, avg=4824.21, stdev=2302.32 00:14:30.530 clat (usec): min=501, max=30305k, avg=4254.22, stdev=261911.10 00:14:30.530 lat (usec): min=509, max=30305k, avg=4259.05, stdev=261911.10 00:14:30.530 clat percentiles (usec): 00:14:30.530 | 1.00th=[ 1614], 5.00th=[ 1745], 10.00th=[ 1778], 20.00th=[ 1811], 00:14:30.530 | 30.00th=[ 1827], 40.00th=[ 1844], 50.00th=[ 1844], 60.00th=[ 1860], 00:14:30.530 | 70.00th=[ 1876], 80.00th=[ 1909], 90.00th=[ 2024], 95.00th=[ 3032], 00:14:30.530 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 7111], 99.95th=[ 8160], 00:14:30.530 | 99.99th=[13042] 00:14:30.530 bw ( KiB/s): min=54952, max=131896, per=100.00%, avg=125237.02, stdev=14410.05, samples=59 00:14:30.530 iops : min=13738, max=32974, avg=31309.25, stdev=3602.51, samples=59 00:14:30.530 write: IOPS=15.6k, BW=60.9MiB/s (63.9MB/s)(3655MiB/60001msec); 0 zone resets 00:14:30.530 slat (nsec): min=926, max=1103.5k, avg=4871.20, stdev=2333.75 00:14:30.530 clat (usec): min=506, max=30305k, avg=3937.76, stdev=238575.15 00:14:30.530 lat (usec): min=510, max=30305k, avg=3942.63, stdev=238575.15 00:14:30.530 clat percentiles (usec): 00:14:30.530 | 1.00th=[ 1647], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1893], 00:14:30.530 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1958], 00:14:30.530 | 70.00th=[ 1975], 80.00th=[ 1991], 90.00th=[ 2073], 95.00th=[ 2966], 00:14:30.530 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 7111], 99.95th=[ 8094], 00:14:30.530 | 99.99th=[12911] 00:14:30.530 bw ( KiB/s): min=54056, max=131464, per=100.00%, avg=125038.24, stdev=14564.39, samples=59 00:14:30.530 iops : min=13514, max=32866, avg=31259.56, stdev=3641.10, samples=59 00:14:30.530 lat (usec) : 750=0.02%, 1000=0.03% 00:14:30.530 lat (msec) : 2=85.65%, 4=11.61%, 10=2.65%, 20=0.03%, >=2000=0.01% 00:14:30.530 cpu : usr=3.46%, sys=15.56%, ctx=63951, majf=0, minf=14 00:14:30.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:30.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:14:30.530 issued rwts: total=936914,935614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:30.530 00:14:30.530 Run status group 0 (all jobs): 00:14:30.530 READ: bw=61.0MiB/s (64.0MB/s), 61.0MiB/s-61.0MiB/s (64.0MB/s-64.0MB/s), io=3660MiB (3838MB), run=60001-60001msec 00:14:30.530 WRITE: bw=60.9MiB/s (63.9MB/s), 60.9MiB/s-60.9MiB/s (63.9MB/s-63.9MB/s), io=3655MiB (3832MB), run=60001-60001msec 00:14:30.530 00:14:30.530 Disk stats (read/write): 00:14:30.530 ublkb1: ios=933759/932374, merge=0/0, ticks=3922406/3547642, in_queue=7470048, util=99.86% 00:14:30.530 17:18:10 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.530 [2024-10-30 17:18:10.683080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:30.530 [2024-10-30 17:18:10.731230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:30.530 [2024-10-30 17:18:10.731372] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:30.530 [2024-10-30 17:18:10.742211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:30.530 [2024-10-30 17:18:10.742316] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:30.530 [2024-10-30 17:18:10.742324] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.530 17:18:10 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.530 [2024-10-30 17:18:10.747294] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:30.530 [2024-10-30 17:18:10.750968] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:30.530 [2024-10-30 17:18:10.751000] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.530 17:18:10 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:30.530 17:18:10 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:30.530 17:18:10 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71291 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71291 ']' 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71291 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71291 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:30.530 killing process with pid 71291 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71291' 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71291 00:14:30.530 17:18:10 ublk_recovery -- common/autotest_common.sh@976 -- # 
wait 71291 00:14:30.530 [2024-10-30 17:18:11.809398] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:30.530 [2024-10-30 17:18:11.809445] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:30.530 00:14:30.530 real 1m4.351s 00:14:30.530 user 1m47.274s 00:14:30.530 sys 0m22.231s 00:14:30.530 17:18:12 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:30.530 17:18:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.530 ************************************ 00:14:30.531 END TEST ublk_recovery 00:14:30.531 ************************************ 00:14:30.531 17:18:12 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:30.531 17:18:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.531 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.531 17:18:12 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:14:30.531 17:18:12 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:30.531 17:18:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:30.531 17:18:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:30.531 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.531 ************************************ 00:14:30.531 START TEST ftl 00:14:30.531 ************************************ 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:30.531 * Looking for test storage... 
00:14:30.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:30.531 17:18:12 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.531 17:18:12 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.531 17:18:12 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.531 17:18:12 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.531 17:18:12 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.531 17:18:12 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.531 17:18:12 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.531 17:18:12 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.531 17:18:12 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.531 17:18:12 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.531 17:18:12 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.531 17:18:12 ftl -- scripts/common.sh@344 -- # case "$op" in 00:14:30.531 17:18:12 ftl -- scripts/common.sh@345 -- # : 1 00:14:30.531 17:18:12 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.531 17:18:12 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.531 17:18:12 ftl -- scripts/common.sh@365 -- # decimal 1 00:14:30.531 17:18:12 ftl -- scripts/common.sh@353 -- # local d=1 00:14:30.531 17:18:12 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.531 17:18:12 ftl -- scripts/common.sh@355 -- # echo 1 00:14:30.531 17:18:12 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.531 17:18:12 ftl -- scripts/common.sh@366 -- # decimal 2 00:14:30.531 17:18:12 ftl -- scripts/common.sh@353 -- # local d=2 00:14:30.531 17:18:12 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.531 17:18:12 ftl -- scripts/common.sh@355 -- # echo 2 00:14:30.531 17:18:12 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.531 17:18:12 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.531 17:18:12 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.531 17:18:12 ftl -- scripts/common.sh@368 -- # return 0 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:30.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.531 --rc genhtml_branch_coverage=1 00:14:30.531 --rc genhtml_function_coverage=1 00:14:30.531 --rc genhtml_legend=1 00:14:30.531 --rc geninfo_all_blocks=1 00:14:30.531 --rc geninfo_unexecuted_blocks=1 00:14:30.531 00:14:30.531 ' 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:30.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.531 --rc genhtml_branch_coverage=1 00:14:30.531 --rc genhtml_function_coverage=1 00:14:30.531 --rc genhtml_legend=1 00:14:30.531 --rc geninfo_all_blocks=1 00:14:30.531 --rc geninfo_unexecuted_blocks=1 00:14:30.531 00:14:30.531 ' 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:30.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.531 --rc genhtml_branch_coverage=1 00:14:30.531 --rc genhtml_function_coverage=1 00:14:30.531 --rc 
genhtml_legend=1 00:14:30.531 --rc geninfo_all_blocks=1 00:14:30.531 --rc geninfo_unexecuted_blocks=1 00:14:30.531 00:14:30.531 ' 00:14:30.531 17:18:12 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:30.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.531 --rc genhtml_branch_coverage=1 00:14:30.531 --rc genhtml_function_coverage=1 00:14:30.531 --rc genhtml_legend=1 00:14:30.531 --rc geninfo_all_blocks=1 00:14:30.531 --rc geninfo_unexecuted_blocks=1 00:14:30.531 00:14:30.531 ' 00:14:30.531 17:18:12 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:30.531 17:18:12 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:30.531 17:18:12 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:30.531 17:18:12 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:30.531 17:18:12 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:30.531 17:18:12 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:30.531 17:18:12 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.531 17:18:12 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:30.531 17:18:12 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:30.531 17:18:12 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.531 17:18:12 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.531 17:18:12 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:30.531 17:18:12 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:30.531 17:18:12 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:30.531 17:18:12 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:30.531 17:18:12 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:30.531 17:18:12 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:30.531 17:18:12 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.531 17:18:12 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.531 17:18:12 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:30.531 17:18:12 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:30.531 17:18:12 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:30.531 17:18:12 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:30.531 17:18:12 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:30.531 17:18:12 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:30.531 17:18:12 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:30.531 17:18:12 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:30.531 17:18:12 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.531 17:18:12 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.531 17:18:12 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.531 17:18:12 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:30.531 17:18:12 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:14:30.531 17:18:12 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:30.531 17:18:12 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:30.532 17:18:12 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:30.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:30.532 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:30.532 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:30.532 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:30.532 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:30.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.532 17:18:13 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72118 00:14:30.532 17:18:13 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72118 00:14:30.532 17:18:13 ftl -- common/autotest_common.sh@833 -- # '[' -z 72118 ']' 00:14:30.532 17:18:13 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.532 17:18:13 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:30.532 17:18:13 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.532 17:18:13 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:30.532 17:18:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:30.532 17:18:13 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:30.532 [2024-10-30 17:18:13.301061] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:14:30.532 [2024-10-30 17:18:13.301221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72118 ] 00:14:30.532 [2024-10-30 17:18:13.459728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.793 [2024-10-30 17:18:13.547754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.392 17:18:14 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.392 17:18:14 ftl -- common/autotest_common.sh@866 -- # return 0 00:14:31.392 17:18:14 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:31.392 17:18:14 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:32.336 17:18:14 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:32.336 17:18:14 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:32.597 17:18:15 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:32.597 17:18:15 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:32.597 17:18:15 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@50 -- # break 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:14:32.858 17:18:15 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@63 -- # break 00:14:32.858 17:18:15 ftl -- ftl/ftl.sh@66 -- # killprocess 72118 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@952 -- # '[' -z 72118 ']' 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@956 -- # kill -0 72118 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@957 -- # uname 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72118 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:32.858 killing process with pid 72118 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72118' 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@971 -- # kill 72118 00:14:32.858 17:18:15 ftl -- common/autotest_common.sh@976 -- # wait 72118 00:14:34.294 17:18:16 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:14:34.294 17:18:16 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:34.294 17:18:16 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:34.294 17:18:16 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:34.294 17:18:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:34.294 ************************************ 00:14:34.294 START TEST ftl_fio_basic 00:14:34.294 ************************************ 00:14:34.294 17:18:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:34.294 * Looking for test storage... 
00:14:34.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:34.294 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:34.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.295 --rc genhtml_branch_coverage=1 00:14:34.295 --rc genhtml_function_coverage=1 00:14:34.295 --rc genhtml_legend=1 00:14:34.295 --rc geninfo_all_blocks=1 00:14:34.295 --rc geninfo_unexecuted_blocks=1 00:14:34.295 00:14:34.295 ' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:34.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.295 --rc 
genhtml_branch_coverage=1 00:14:34.295 --rc genhtml_function_coverage=1 00:14:34.295 --rc genhtml_legend=1 00:14:34.295 --rc geninfo_all_blocks=1 00:14:34.295 --rc geninfo_unexecuted_blocks=1 00:14:34.295 00:14:34.295 ' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:34.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.295 --rc genhtml_branch_coverage=1 00:14:34.295 --rc genhtml_function_coverage=1 00:14:34.295 --rc genhtml_legend=1 00:14:34.295 --rc geninfo_all_blocks=1 00:14:34.295 --rc geninfo_unexecuted_blocks=1 00:14:34.295 00:14:34.295 ' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:34.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.295 --rc genhtml_branch_coverage=1 00:14:34.295 --rc genhtml_function_coverage=1 00:14:34.295 --rc genhtml_legend=1 00:14:34.295 --rc geninfo_all_blocks=1 00:14:34.295 --rc geninfo_unexecuted_blocks=1 00:14:34.295 00:14:34.295 ' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:34.295 
17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72250 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72250 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72250 ']' 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:34.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:34.295 17:18:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:34.295 [2024-10-30 17:18:17.228574] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:14:34.295 [2024-10-30 17:18:17.228720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72250 ] 00:14:34.555 [2024-10-30 17:18:17.387849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:34.555 [2024-10-30 17:18:17.478532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.555 [2024-10-30 17:18:17.478805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.555 [2024-10-30 17:18:17.478830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.125 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:35.125 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:14:35.125 17:18:18 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:35.125 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:35.125 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:35.125 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:35.125 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:35.125 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:35.386 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:35.386 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:35.386 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:35.386 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:14:35.386 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:35.386 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:14:35.386 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:14:35.386 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:35.647 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:35.647 { 00:14:35.647 "name": "nvme0n1", 00:14:35.647 "aliases": [ 00:14:35.648 "081b52b9-00e3-49f2-a285-3605a8a39a7b" 00:14:35.648 ], 00:14:35.648 "product_name": "NVMe disk", 00:14:35.648 "block_size": 4096, 00:14:35.648 "num_blocks": 1310720, 00:14:35.648 "uuid": "081b52b9-00e3-49f2-a285-3605a8a39a7b", 00:14:35.648 "numa_id": -1, 00:14:35.648 "assigned_rate_limits": { 00:14:35.648 "rw_ios_per_sec": 0, 00:14:35.648 "rw_mbytes_per_sec": 0, 00:14:35.648 "r_mbytes_per_sec": 0, 00:14:35.648 "w_mbytes_per_sec": 0 00:14:35.648 }, 00:14:35.648 "claimed": false, 00:14:35.648 "zoned": false, 00:14:35.648 "supported_io_types": { 00:14:35.648 "read": true, 00:14:35.648 "write": true, 00:14:35.648 "unmap": true, 00:14:35.648 "flush": true, 00:14:35.648 "reset": true, 00:14:35.648 "nvme_admin": true, 00:14:35.648 "nvme_io": true, 00:14:35.648 "nvme_io_md": 
false, 00:14:35.648 "write_zeroes": true, 00:14:35.648 "zcopy": false, 00:14:35.648 "get_zone_info": false, 00:14:35.648 "zone_management": false, 00:14:35.648 "zone_append": false, 00:14:35.648 "compare": true, 00:14:35.648 "compare_and_write": false, 00:14:35.648 "abort": true, 00:14:35.648 "seek_hole": false, 00:14:35.648 "seek_data": false, 00:14:35.648 "copy": true, 00:14:35.648 "nvme_iov_md": false 00:14:35.648 }, 00:14:35.648 "driver_specific": { 00:14:35.648 "nvme": [ 00:14:35.648 { 00:14:35.648 "pci_address": "0000:00:11.0", 00:14:35.648 "trid": { 00:14:35.648 "trtype": "PCIe", 00:14:35.648 "traddr": "0000:00:11.0" 00:14:35.648 }, 00:14:35.648 "ctrlr_data": { 00:14:35.648 "cntlid": 0, 00:14:35.648 "vendor_id": "0x1b36", 00:14:35.648 "model_number": "QEMU NVMe Ctrl", 00:14:35.648 "serial_number": "12341", 00:14:35.648 "firmware_revision": "8.0.0", 00:14:35.648 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:35.648 "oacs": { 00:14:35.648 "security": 0, 00:14:35.648 "format": 1, 00:14:35.648 "firmware": 0, 00:14:35.648 "ns_manage": 1 00:14:35.648 }, 00:14:35.648 "multi_ctrlr": false, 00:14:35.648 "ana_reporting": false 00:14:35.648 }, 00:14:35.648 "vs": { 00:14:35.648 "nvme_version": "1.4" 00:14:35.648 }, 00:14:35.648 "ns_data": { 00:14:35.648 "id": 1, 00:14:35.648 "can_share": false 00:14:35.648 } 00:14:35.648 } 00:14:35.648 ], 00:14:35.648 "mp_policy": "active_passive" 00:14:35.648 } 00:14:35.648 } 00:14:35.648 ]' 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:35.648 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:35.909 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:35.909 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:36.170 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=7558edf6-2d0b-4864-bdcc-c1731f4338ff 00:14:36.170 17:18:18 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7558edf6-2d0b-4864-bdcc-c1731f4338ff 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.431 17:18:19 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.431 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:36.431 { 00:14:36.431 "name": "a934e5f3-13fd-4933-90bb-44ae46261efc", 00:14:36.431 "aliases": [ 00:14:36.431 "lvs/nvme0n1p0" 00:14:36.431 ], 00:14:36.431 "product_name": "Logical Volume", 00:14:36.431 "block_size": 4096, 00:14:36.432 "num_blocks": 26476544, 00:14:36.432 "uuid": "a934e5f3-13fd-4933-90bb-44ae46261efc", 00:14:36.432 "assigned_rate_limits": { 00:14:36.432 "rw_ios_per_sec": 0, 00:14:36.432 "rw_mbytes_per_sec": 0, 00:14:36.432 "r_mbytes_per_sec": 0, 00:14:36.432 "w_mbytes_per_sec": 0 00:14:36.432 }, 00:14:36.432 "claimed": false, 00:14:36.432 "zoned": false, 00:14:36.432 "supported_io_types": { 00:14:36.432 "read": true, 00:14:36.432 "write": true, 00:14:36.432 "unmap": true, 00:14:36.432 "flush": false, 00:14:36.432 "reset": true, 00:14:36.432 "nvme_admin": false, 00:14:36.432 "nvme_io": false, 00:14:36.432 "nvme_io_md": false, 00:14:36.432 "write_zeroes": true, 00:14:36.432 "zcopy": false, 00:14:36.432 "get_zone_info": false, 00:14:36.432 "zone_management": false, 00:14:36.432 "zone_append": false, 00:14:36.432 "compare": false, 00:14:36.432 "compare_and_write": false, 00:14:36.432 "abort": false, 00:14:36.432 "seek_hole": true, 00:14:36.432 "seek_data": true, 00:14:36.432 "copy": false, 00:14:36.432 "nvme_iov_md": false 00:14:36.432 }, 00:14:36.432 "driver_specific": { 00:14:36.432 "lvol": { 00:14:36.432 "lvol_store_uuid": "7558edf6-2d0b-4864-bdcc-c1731f4338ff", 00:14:36.432 "base_bdev": "nvme0n1", 00:14:36.432 "thin_provision": true, 00:14:36.432 "num_allocated_clusters": 0, 00:14:36.432 "snapshot": false, 00:14:36.432 "clone": false, 00:14:36.432 "esnap_clone": false 00:14:36.432 } 00:14:36.432 } 00:14:36.432 } 00:14:36.432 ]' 00:14:36.432 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:36.432 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:14:36.432 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:36.693 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:14:36.693 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:14:36.693 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:14:36.693 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:14:36.693 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:14:36.693 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:36.953 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:36.953 { 00:14:36.953 "name": "a934e5f3-13fd-4933-90bb-44ae46261efc", 00:14:36.953 "aliases": [ 00:14:36.953 "lvs/nvme0n1p0" 00:14:36.953 ], 00:14:36.953 "product_name": "Logical Volume", 00:14:36.953 "block_size": 4096, 00:14:36.953 "num_blocks": 26476544, 00:14:36.953 "uuid": "a934e5f3-13fd-4933-90bb-44ae46261efc", 00:14:36.953 "assigned_rate_limits": { 00:14:36.953 "rw_ios_per_sec": 0, 00:14:36.953 "rw_mbytes_per_sec": 0, 00:14:36.953 "r_mbytes_per_sec": 0, 00:14:36.953 "w_mbytes_per_sec": 0 00:14:36.953 }, 00:14:36.953 "claimed": false, 00:14:36.953 "zoned": false, 00:14:36.953 "supported_io_types": { 00:14:36.954 "read": true, 00:14:36.954 "write": true, 00:14:36.954 "unmap": true, 00:14:36.954 "flush": false, 00:14:36.954 "reset": true, 00:14:36.954 "nvme_admin": false, 00:14:36.954 "nvme_io": false, 00:14:36.954 "nvme_io_md": false, 00:14:36.954 "write_zeroes": true, 00:14:36.954 "zcopy": false, 00:14:36.954 "get_zone_info": false, 00:14:36.954 "zone_management": false, 00:14:36.954 "zone_append": false, 00:14:36.954 "compare": false, 00:14:36.954 "compare_and_write": false, 00:14:36.954 "abort": false, 00:14:36.954 "seek_hole": true, 00:14:36.954 "seek_data": true, 00:14:36.954 "copy": false, 00:14:36.954 "nvme_iov_md": false 00:14:36.954 }, 00:14:36.954 "driver_specific": { 00:14:36.954 "lvol": { 00:14:36.954 "lvol_store_uuid": "7558edf6-2d0b-4864-bdcc-c1731f4338ff", 00:14:36.954 "base_bdev": "nvme0n1", 00:14:36.954 "thin_provision": true, 00:14:36.954 "num_allocated_clusters": 0, 00:14:36.954 "snapshot": false, 00:14:36.954 "clone": false, 00:14:36.954 "esnap_clone": false 00:14:36.954 } 00:14:36.954 } 00:14:36.954 } 00:14:36.954 ]' 00:14:36.954 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:36.954 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:14:36.954 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:37.214 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:14:37.214 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:14:37.214 17:18:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:14:37.214 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:14:37.214 17:18:19 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:37.214 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:14:37.214 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a934e5f3-13fd-4933-90bb-44ae46261efc 00:14:37.474 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:37.474 { 00:14:37.474 "name": "a934e5f3-13fd-4933-90bb-44ae46261efc", 00:14:37.474 "aliases": [ 00:14:37.474 "lvs/nvme0n1p0" 00:14:37.474 ], 00:14:37.474 "product_name": "Logical Volume", 00:14:37.474 "block_size": 4096, 00:14:37.474 "num_blocks": 26476544, 00:14:37.474 "uuid": "a934e5f3-13fd-4933-90bb-44ae46261efc", 00:14:37.474 "assigned_rate_limits": { 00:14:37.474 "rw_ios_per_sec": 0, 00:14:37.474 "rw_mbytes_per_sec": 0, 00:14:37.474 "r_mbytes_per_sec": 0, 00:14:37.474 "w_mbytes_per_sec": 0 00:14:37.474 }, 00:14:37.474 "claimed": false, 00:14:37.474 "zoned": false, 00:14:37.474 "supported_io_types": { 00:14:37.474 "read": true, 00:14:37.474 "write": true, 00:14:37.474 "unmap": true, 00:14:37.474 "flush": false, 00:14:37.474 "reset": true, 00:14:37.474 "nvme_admin": false, 00:14:37.475 "nvme_io": false, 00:14:37.475 "nvme_io_md": false, 00:14:37.475 "write_zeroes": true, 00:14:37.475 "zcopy": false, 00:14:37.475 "get_zone_info": false, 00:14:37.475 "zone_management": false, 00:14:37.475 "zone_append": false, 00:14:37.475 "compare": false, 00:14:37.475 "compare_and_write": false, 00:14:37.475 "abort": false, 00:14:37.475 "seek_hole": true, 00:14:37.475 "seek_data": true, 00:14:37.475 "copy": false, 00:14:37.475 "nvme_iov_md": false 00:14:37.475 }, 00:14:37.475 "driver_specific": { 00:14:37.475 "lvol": { 00:14:37.475 "lvol_store_uuid": "7558edf6-2d0b-4864-bdcc-c1731f4338ff", 00:14:37.475 "base_bdev": "nvme0n1", 00:14:37.475 "thin_provision": true, 00:14:37.475 "num_allocated_clusters": 0, 00:14:37.475 "snapshot": false, 00:14:37.475 "clone": false, 00:14:37.475 "esnap_clone": false 00:14:37.475 } 00:14:37.475 } 00:14:37.475 } 00:14:37.475 ]' 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:37.475 17:18:20 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a934e5f3-13fd-4933-90bb-44ae46261efc -c nvc0n1p0 --l2p_dram_limit 60 00:14:37.736 [2024-10-30 17:18:20.605836] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.736 [2024-10-30 17:18:20.605873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:37.736 [2024-10-30 17:18:20.605885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:37.736 [2024-10-30 17:18:20.605892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.736 [2024-10-30 17:18:20.605946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.736 [2024-10-30 17:18:20.605954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:37.737 [2024-10-30 17:18:20.605961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:14:37.737 [2024-10-30 17:18:20.605969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.605999] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:37.737 [2024-10-30 17:18:20.606587] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:37.737 [2024-10-30 17:18:20.606609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.606615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:37.737 [2024-10-30 17:18:20.606623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:14:37.737 [2024-10-30 17:18:20.606629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.606658] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b9441887-b3dc-441d-b882-b5f14edafd29 00:14:37.737 [2024-10-30 17:18:20.607637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.607658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:37.737 [2024-10-30 17:18:20.607667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:14:37.737 [2024-10-30 17:18:20.607674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.612451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.612480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:37.737 [2024-10-30 17:18:20.612488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:14:37.737 [2024-10-30 17:18:20.612496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.612575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.612585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:37.737 [2024-10-30 17:18:20.612592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:14:37.737 [2024-10-30 17:18:20.612602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.612642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.612651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:37.737 [2024-10-30 17:18:20.612657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:37.737 [2024-10-30 17:18:20.612665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:14:37.737 [2024-10-30 17:18:20.612689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:37.737 [2024-10-30 17:18:20.615633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.615745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:37.737 [2024-10-30 17:18:20.615761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms 00:14:37.737 [2024-10-30 17:18:20.615767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.615797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.615806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:37.737 [2024-10-30 17:18:20.615814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:37.737 [2024-10-30 17:18:20.615819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.615845] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:37.737 [2024-10-30 17:18:20.615962] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:14:37.737 [2024-10-30 17:18:20.615976] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:37.737 [2024-10-30 17:18:20.615984] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:14:37.737 [2024-10-30 17:18:20.615993] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616001] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616009] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:37.737 [2024-10-30 17:18:20.616015] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:37.737 [2024-10-30 17:18:20.616022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:14:37.737 [2024-10-30 17:18:20.616027] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:14:37.737 [2024-10-30 17:18:20.616035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.616040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:37.737 [2024-10-30 17:18:20.616051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:14:37.737 [2024-10-30 17:18:20.616056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.616131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.737 [2024-10-30 17:18:20.616137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:37.737 [2024-10-30 17:18:20.616145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:14:37.737 [2024-10-30 17:18:20.616151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.737 [2024-10-30 17:18:20.616254] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:37.737 [2024-10-30 17:18:20.616264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:37.737 
[2024-10-30 17:18:20.616272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:37.737 [2024-10-30 17:18:20.616293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:37.737 [2024-10-30 17:18:20.616312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:37.737 [2024-10-30 17:18:20.616324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:37.737 [2024-10-30 17:18:20.616329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:37.737 [2024-10-30 17:18:20.616336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:37.737 [2024-10-30 17:18:20.616345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:37.737 [2024-10-30 17:18:20.616352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:14:37.737 [2024-10-30 17:18:20.616357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:37.737 [2024-10-30 17:18:20.616371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:37.737 [2024-10-30 17:18:20.616389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:37.737 [2024-10-30 17:18:20.616406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:37.737 [2024-10-30 17:18:20.616424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:37.737 [2024-10-30 17:18:20.616441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:37.737 [2024-10-30 17:18:20.616461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:14:37.737 [2024-10-30 17:18:20.616472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:37.737 [2024-10-30 17:18:20.616487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:14:37.737 [2024-10-30 17:18:20.616493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:37.737 [2024-10-30 17:18:20.616498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:14:37.737 [2024-10-30 17:18:20.616504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:14:37.737 [2024-10-30 17:18:20.616509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:14:37.737 [2024-10-30 17:18:20.616521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:14:37.737 [2024-10-30 17:18:20.616528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616533] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:37.737 [2024-10-30 17:18:20.616540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:37.737 [2024-10-30 17:18:20.616550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:37.737 [2024-10-30 17:18:20.616564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:37.737 [2024-10-30 17:18:20.616572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:37.737 [2024-10-30 17:18:20.616577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:37.737 [2024-10-30 17:18:20.616584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:37.737 [2024-10-30 17:18:20.616590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:37.737 [2024-10-30 17:18:20.616597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:37.737 [2024-10-30 17:18:20.616605] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:37.737 [2024-10-30 17:18:20.616613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:37.737 [2024-10-30 17:18:20.616620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:37.737 [2024-10-30 17:18:20.616627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:14:37.737 [2024-10-30 17:18:20.616633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:14:37.738 [2024-10-30 17:18:20.616639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:14:37.738 [2024-10-30 17:18:20.616645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:14:37.738 [2024-10-30 17:18:20.616652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:14:37.738 [2024-10-30 
17:18:20.616657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:14:37.738 [2024-10-30 17:18:20.616664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:14:37.738 [2024-10-30 17:18:20.616670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:14:37.738 [2024-10-30 17:18:20.616678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:14:37.738 [2024-10-30 17:18:20.616684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:14:37.738 [2024-10-30 17:18:20.616691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:14:37.738 [2024-10-30 17:18:20.616697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:14:37.738 [2024-10-30 17:18:20.616704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:14:37.738 [2024-10-30 17:18:20.616710] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:37.738 [2024-10-30 17:18:20.616717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:37.738 [2024-10-30 17:18:20.616723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:37.738 [2024-10-30 17:18:20.616730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:37.738 [2024-10-30 17:18:20.616736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:37.738 [2024-10-30 17:18:20.616743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:37.738 [2024-10-30 17:18:20.616749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:37.738 [2024-10-30 17:18:20.616756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:37.738 [2024-10-30 17:18:20.616766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:14:37.738 [2024-10-30 17:18:20.616773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:37.738 [2024-10-30 17:18:20.616831] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:14:37.738 [2024-10-30 17:18:20.616842] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:14:43.021 [2024-10-30 17:18:25.127854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.021 [2024-10-30 17:18:25.127912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:43.021 [2024-10-30 17:18:25.127926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4511.003 ms 00:14:43.021 [2024-10-30 17:18:25.127939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.021 [2024-10-30 17:18:25.152787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.021 [2024-10-30 17:18:25.152835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:43.021 [2024-10-30 17:18:25.152847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.649 ms 00:14:43.021 [2024-10-30 17:18:25.152857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.021 [2024-10-30 17:18:25.152978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.152991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:43.022 [2024-10-30 17:18:25.152999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:14:43.022 [2024-10-30 17:18:25.153010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.194750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.194793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:43.022 [2024-10-30 17:18:25.194805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.699 ms 00:14:43.022 [2024-10-30 17:18:25.194818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.194856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.194866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:43.022 [2024-10-30 17:18:25.194875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:43.022 [2024-10-30 17:18:25.194883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.195258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.195278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:43.022 [2024-10-30 17:18:25.195287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:14:43.022 [2024-10-30 17:18:25.195296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.195417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.195428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:43.022 [2024-10-30 17:18:25.195436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:14:43.022 [2024-10-30 17:18:25.195447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.211683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.211818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:43.022 [2024-10-30 
17:18:25.211833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.209 ms 00:14:43.022 [2024-10-30 17:18:25.211843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.223087] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:43.022 [2024-10-30 17:18:25.236843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.236887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:43.022 [2024-10-30 17:18:25.236899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.912 ms 00:14:43.022 [2024-10-30 17:18:25.236907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.294438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.294591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:43.022 [2024-10-30 17:18:25.294611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.497 ms 00:14:43.022 [2024-10-30 17:18:25.294619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.294793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.294803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:43.022 [2024-10-30 17:18:25.294815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:14:43.022 [2024-10-30 17:18:25.294823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.317767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.317910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:43.022 [2024-10-30 17:18:25.317929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.896 ms 00:14:43.022 [2024-10-30 17:18:25.317940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.340771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.340882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:43.022 [2024-10-30 17:18:25.340901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.795 ms 00:14:43.022 [2024-10-30 17:18:25.340909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.341489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.341502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:43.022 [2024-10-30 17:18:25.341512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:14:43.022 [2024-10-30 17:18:25.341519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.410523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.410554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:43.022 [2024-10-30 17:18:25.410569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.962 ms 00:14:43.022 [2024-10-30 17:18:25.410577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 
17:18:25.434409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.434440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:43.022 [2024-10-30 17:18:25.434453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.748 ms 00:14:43.022 [2024-10-30 17:18:25.434460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.457214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.457244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:14:43.022 [2024-10-30 17:18:25.457257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.712 ms 00:14:43.022 [2024-10-30 17:18:25.457264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.480297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.480331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:43.022 [2024-10-30 17:18:25.480345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.991 ms 00:14:43.022 [2024-10-30 17:18:25.480352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.480399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.480408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:43.022 [2024-10-30 17:18:25.480420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:43.022 [2024-10-30 17:18:25.480427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.480507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.022 [2024-10-30 17:18:25.480516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:43.022 [2024-10-30 17:18:25.480528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:14:43.022 [2024-10-30 17:18:25.480535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.022 [2024-10-30 17:18:25.481363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4875.096 ms, result 0 00:14:43.022 { 00:14:43.022 "name": "ftl0", 00:14:43.022 "uuid": "b9441887-b3dc-441d-b882-b5f14edafd29" 00:14:43.022 } 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:43.022 [ 00:14:43.022 { 00:14:43.022 "name": "ftl0", 00:14:43.022 "aliases": [ 00:14:43.022 "b9441887-b3dc-441d-b882-b5f14edafd29" 00:14:43.022 ], 00:14:43.022 "product_name": "FTL 
disk", 00:14:43.022 "block_size": 4096, 00:14:43.022 "num_blocks": 20971520, 00:14:43.022 "uuid": "b9441887-b3dc-441d-b882-b5f14edafd29", 00:14:43.022 "assigned_rate_limits": { 00:14:43.022 "rw_ios_per_sec": 0, 00:14:43.022 "rw_mbytes_per_sec": 0, 00:14:43.022 "r_mbytes_per_sec": 0, 00:14:43.022 "w_mbytes_per_sec": 0 00:14:43.022 }, 00:14:43.022 "claimed": false, 00:14:43.022 "zoned": false, 00:14:43.022 "supported_io_types": { 00:14:43.022 "read": true, 00:14:43.022 "write": true, 00:14:43.022 "unmap": true, 00:14:43.022 "flush": true, 00:14:43.022 "reset": false, 00:14:43.022 "nvme_admin": false, 00:14:43.022 "nvme_io": false, 00:14:43.022 "nvme_io_md": false, 00:14:43.022 "write_zeroes": true, 00:14:43.022 "zcopy": false, 00:14:43.022 "get_zone_info": false, 00:14:43.022 "zone_management": false, 00:14:43.022 "zone_append": false, 00:14:43.022 "compare": false, 00:14:43.022 "compare_and_write": false, 00:14:43.022 "abort": false, 00:14:43.022 "seek_hole": false, 00:14:43.022 "seek_data": false, 00:14:43.022 "copy": false, 00:14:43.022 "nvme_iov_md": false 00:14:43.022 }, 00:14:43.022 "driver_specific": { 00:14:43.022 "ftl": { 00:14:43.022 "base_bdev": "a934e5f3-13fd-4933-90bb-44ae46261efc", 00:14:43.022 "cache": "nvc0n1p0" 00:14:43.022 } 00:14:43.022 } 00:14:43.022 } 00:14:43.022 ] 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:43.022 17:18:25 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:43.281 17:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:14:43.281 17:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:43.281 [2024-10-30 17:18:26.258150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.281 [2024-10-30 17:18:26.258194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:43.281 [2024-10-30 17:18:26.258222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:43.281 [2024-10-30 17:18:26.258232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.281 [2024-10-30 17:18:26.258265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:43.281 [2024-10-30 17:18:26.260845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.281 [2024-10-30 17:18:26.260876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:43.281 [2024-10-30 17:18:26.260888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:14:43.281 [2024-10-30 17:18:26.260896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.281 [2024-10-30 17:18:26.261290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.281 [2024-10-30 17:18:26.261305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:43.281 [2024-10-30 17:18:26.261316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:14:43.281 [2024-10-30 17:18:26.261323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.264562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.264584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:43.541 
[2024-10-30 17:18:26.264597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:14:43.541 [2024-10-30 17:18:26.264605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.270805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.270830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:14:43.541 [2024-10-30 17:18:26.270841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.175 ms 00:14:43.541 [2024-10-30 17:18:26.270848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.294206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.294247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:43.541 [2024-10-30 17:18:26.294260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.275 ms 00:14:43.541 [2024-10-30 17:18:26.294267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.308488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.308519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:43.541 [2024-10-30 17:18:26.308533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.166 ms 00:14:43.541 [2024-10-30 17:18:26.308541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.308721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.308731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:43.541 [2024-10-30 17:18:26.308741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:14:43.541 [2024-10-30 17:18:26.308749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.331756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.331875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:14:43.541 [2024-10-30 17:18:26.331895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.984 ms 00:14:43.541 [2024-10-30 17:18:26.331902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.354606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.354715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:14:43.541 [2024-10-30 17:18:26.354733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.663 ms 00:14:43.541 [2024-10-30 17:18:26.354740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.377033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.377063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:43.541 [2024-10-30 17:18:26.377075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.255 ms 00:14:43.541 [2024-10-30 17:18:26.377082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.399345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.541 [2024-10-30 17:18:26.399381] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:43.541 [2024-10-30 17:18:26.399393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.181 ms 00:14:43.541 [2024-10-30 17:18:26.399400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.541 [2024-10-30 17:18:26.399441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:43.541 [2024-10-30 17:18:26.399455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:43.541 [2024-10-30 17:18:26.399466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:43.541 [2024-10-30 17:18:26.399474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 
[2024-10-30 17:18:26.399638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:14:43.542 [2024-10-30 17:18:26.399848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.399994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:43.542 [2024-10-30 17:18:26.400275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:43.543 [2024-10-30 17:18:26.400285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:43.543 [2024-10-30 17:18:26.400292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:43.543 [2024-10-30 17:18:26.400302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:43.543 [2024-10-30 17:18:26.400318] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:43.543 [2024-10-30 17:18:26.400348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9441887-b3dc-441d-b882-b5f14edafd29 00:14:43.543 [2024-10-30 17:18:26.400356] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:43.543 [2024-10-30 17:18:26.400366] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:43.543 [2024-10-30 17:18:26.400373] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:43.543 [2024-10-30 17:18:26.400382] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:43.543 [2024-10-30 17:18:26.400391] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:43.543 [2024-10-30 17:18:26.400402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:43.543 [2024-10-30 17:18:26.400410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:43.543 [2024-10-30 17:18:26.400418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:43.543 [2024-10-30 17:18:26.400424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:43.543 [2024-10-30 17:18:26.400433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.543 [2024-10-30 17:18:26.400440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:43.543 [2024-10-30 17:18:26.400449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:14:43.543 [2024-10-30 17:18:26.400456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.543 [2024-10-30 17:18:26.412619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.543 [2024-10-30 17:18:26.412648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:43.543 [2024-10-30 17:18:26.412659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.113 ms 00:14:43.543 [2024-10-30 17:18:26.412669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.543 [2024-10-30 17:18:26.413022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:43.543 [2024-10-30 17:18:26.413040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:43.543 [2024-10-30 17:18:26.413050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:14:43.543 [2024-10-30 17:18:26.413057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.543 [2024-10-30 17:18:26.454401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.543 [2024-10-30 17:18:26.454429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:43.543 [2024-10-30 17:18:26.454441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.543 [2024-10-30 17:18:26.454447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:14:43.543 [2024-10-30 17:18:26.454499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.543 [2024-10-30 17:18:26.454505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:43.543 [2024-10-30 17:18:26.454512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.543 [2024-10-30 17:18:26.454517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.543 [2024-10-30 17:18:26.454583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.543 [2024-10-30 17:18:26.454591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:43.543 [2024-10-30 17:18:26.454599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.543 [2024-10-30 17:18:26.454606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.543 [2024-10-30 17:18:26.454627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.543 [2024-10-30 17:18:26.454633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:43.543 [2024-10-30 17:18:26.454640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.543 [2024-10-30 17:18:26.454646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.543 [2024-10-30 17:18:26.517961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.543 [2024-10-30 17:18:26.517997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:43.543 [2024-10-30 17:18:26.518008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.543 [2024-10-30 17:18:26.518016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.802 [2024-10-30 17:18:26.566481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.802 [2024-10-30 17:18:26.566629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:43.802 [2024-10-30 17:18:26.566645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.802 [2024-10-30 17:18:26.566651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.802 [2024-10-30 17:18:26.566729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.802 [2024-10-30 17:18:26.566736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:43.802 [2024-10-30 17:18:26.566744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.802 [2024-10-30 17:18:26.566750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.802 [2024-10-30 17:18:26.566801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.802 [2024-10-30 17:18:26.566808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:43.802 [2024-10-30 17:18:26.566815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.802 [2024-10-30 17:18:26.566821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.802 [2024-10-30 17:18:26.566907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.802 [2024-10-30 17:18:26.566914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:43.802 [2024-10-30 17:18:26.566922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.802 [2024-10-30 
17:18:26.566927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.802 [2024-10-30 17:18:26.566963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.802 [2024-10-30 17:18:26.566972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:43.802 [2024-10-30 17:18:26.566981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.802 [2024-10-30 17:18:26.566986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.802 [2024-10-30 17:18:26.567021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.802 [2024-10-30 17:18:26.567027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:43.802 [2024-10-30 17:18:26.567035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.802 [2024-10-30 17:18:26.567041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.802 [2024-10-30 17:18:26.567084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:43.802 [2024-10-30 17:18:26.567091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:43.802 [2024-10-30 17:18:26.567098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:43.802 [2024-10-30 17:18:26.567104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:43.802 [2024-10-30 17:18:26.567243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 309.064 ms, result 0 00:14:43.802 true 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72250 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72250 ']' 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72250 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72250 00:14:43.802 killing process with pid 72250 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72250' 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72250 00:14:43.802 17:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72250 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.941 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.942 17:18:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:51.942 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:51.942 fio-3.35 00:14:51.942 Starting 1 thread 00:14:56.130 00:14:56.130 test: (groupid=0, jobs=1): err= 0: pid=72460: Wed Oct 30 17:18:38 2024 00:14:56.130 read: IOPS=1087, BW=72.2MiB/s (75.8MB/s)(255MiB/3523msec) 00:14:56.130 slat (nsec): min=2928, max=17121, avg=4218.14, stdev=1867.95 00:14:56.130 clat (usec): min=262, max=1144, avg=411.00, stdev=118.13 00:14:56.130 lat (usec): min=266, max=1149, avg=415.22, stdev=118.69 00:14:56.130 clat percentiles (usec): 00:14:56.130 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:14:56.130 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 429], 00:14:56.130 | 70.00th=[ 498], 80.00th=[ 515], 90.00th=[ 529], 95.00th=[ 578], 00:14:56.130 | 99.00th=[ 857], 99.50th=[ 898], 99.90th=[ 1090], 99.95th=[ 1123], 00:14:56.130 | 99.99th=[ 1139] 00:14:56.130 write: IOPS=1095, BW=72.7MiB/s (76.3MB/s)(256MiB/3520msec); 0 zone resets 00:14:56.130 slat (nsec): min=13626, max=56977, avg=19161.31, stdev=4202.37 00:14:56.130 clat (usec): min=292, max=1984, avg=468.37, stdev=173.13 00:14:56.130 lat (usec): min=319, max=2011, avg=487.53, stdev=174.12 00:14:56.130 clat percentiles (usec): 00:14:56.130 | 1.00th=[ 330], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 347], 00:14:56.130 | 30.00th=[ 351], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 469], 00:14:56.130 | 70.00th=[ 529], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 676], 00:14:56.130 | 99.00th=[ 1319], 99.50th=[ 1582], 99.90th=[ 1860], 99.95th=[ 1926], 00:14:56.130 | 99.99th=[ 1991] 00:14:56.130 bw ( KiB/s): min=53992, max=93160, per=99.92%, avg=74430.86, stdev=17244.62, samples=7 00:14:56.130 iops : min= 794, max= 1370, avg=1094.57, stdev=253.60, samples=7 00:14:56.130 lat (usec) : 500=68.38%, 750=28.22%, 1000=2.64% 
00:14:56.130 lat (msec) : 2=0.75% 00:14:56.130 cpu : usr=99.29%, sys=0.09%, ctx=4, majf=0, minf=1169 00:14:56.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:56.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.130 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:56.130 00:14:56.130 Run status group 0 (all jobs): 00:14:56.130 READ: bw=72.2MiB/s (75.8MB/s), 72.2MiB/s-72.2MiB/s (75.8MB/s-75.8MB/s), io=255MiB (267MB), run=3523-3523msec 00:14:56.130 WRITE: bw=72.7MiB/s (76.3MB/s), 72.7MiB/s-72.7MiB/s (76.3MB/s-76.3MB/s), io=256MiB (269MB), run=3520-3520msec 00:14:57.070 ----------------------------------------------------- 00:14:57.071 Suppressions used: 00:14:57.071 count bytes template 00:14:57.071 1 5 /usr/src/fio/parse.c 00:14:57.071 1 8 libtcmalloc_minimal.so 00:14:57.071 1 904 libcrypto.so 00:14:57.071 ----------------------------------------------------- 00:14:57.071 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:57.071 17:18:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:57.330 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:57.330 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:57.330 fio-3.35 00:14:57.330 Starting 2 threads 00:15:23.895 00:15:23.895 first_half: (groupid=0, jobs=1): err= 0: pid=72558: Wed Oct 30 17:19:03 2024 00:15:23.895 read: IOPS=2932, BW=11.5MiB/s (12.0MB/s)(255MiB/22245msec) 00:15:23.895 slat (usec): min=3, max=193, avg= 4.50, stdev= 1.32 00:15:23.895 clat (usec): min=647, max=305670, avg=34081.31, stdev=17161.34 00:15:23.895 lat (usec): min=652, max=305674, avg=34085.81, stdev=17161.46 00:15:23.895 clat percentiles (msec): 00:15:23.895 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:15:23.895 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:15:23.895 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 46], 00:15:23.895 | 99.00th=[ 130], 99.50th=[ 146], 99.90th=[ 184], 99.95th=[ 257], 00:15:23.895 | 99.99th=[ 296] 00:15:23.895 write: IOPS=3804, BW=14.9MiB/s (15.6MB/s)(256MiB/17225msec); 0 zone resets 00:15:23.895 slat (usec): min=3, max=399, avg= 6.03, stdev= 3.55 00:15:23.895 clat (usec): min=362, max=77001, avg=9484.98, stdev=16033.78 00:15:23.895 lat (usec): min=369, max=77006, avg=9491.01, stdev=16033.71 00:15:23.895 clat percentiles (usec): 00:15:23.895 | 1.00th=[ 685], 5.00th=[ 807], 10.00th=[ 963], 20.00th=[ 1221], 00:15:23.895 | 30.00th=[ 2442], 40.00th=[ 3589], 50.00th=[ 4621], 60.00th=[ 5407], 00:15:23.895 | 70.00th=[ 6521], 80.00th=[10552], 90.00th=[13829], 95.00th=[61604], 00:15:23.895 | 99.00th=[69731], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:15:23.895 | 99.99th=[76022] 00:15:23.895 bw ( KiB/s): min= 1792, max=40776, per=90.75%, avg=24966.10, stdev=12780.56, samples=21 00:15:23.895 iops : min= 448, max=10194, avg=6241.52, stdev=3195.14, samples=21 00:15:23.895 lat (usec) : 500=0.02%, 750=1.50%, 1000=4.33% 00:15:23.895 lat (msec) : 2=8.25%, 4=7.97%, 10=17.78%, 20=6.89%, 50=47.54% 00:15:23.895 lat (msec) : 100=4.70%, 250=1.00%, 500=0.03% 00:15:23.895 cpu : usr=99.25%, sys=0.13%, ctx=43, majf=0, minf=5599 00:15:23.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:23.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.895 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.895 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.895 second_half: (groupid=0, jobs=1): err= 0: pid=72559: Wed Oct 30 17:19:03 2024 00:15:23.895 read: IOPS=2906, BW=11.4MiB/s (11.9MB/s)(255MiB/22446msec) 00:15:23.895 slat (nsec): min=2999, max=54568, avg=4463.45, stdev=1162.50 00:15:23.895 clat (usec): min=566, max=309084, avg=33834.29, stdev=18710.26 00:15:23.895 lat (usec): min=571, max=309091, avg=33838.76, stdev=18710.29 00:15:23.895 clat percentiles (msec): 00:15:23.895 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:15:23.895 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:15:23.895 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 
47], 00:15:23.895 | 99.00th=[ 134], 99.50th=[ 163], 99.90th=[ 226], 99.95th=[ 255], 00:15:23.895 | 99.99th=[ 305] 00:15:23.895 write: IOPS=3438, BW=13.4MiB/s (14.1MB/s)(256MiB/19058msec); 0 zone resets 00:15:23.895 slat (usec): min=3, max=410, avg= 6.39, stdev= 3.43 00:15:23.895 clat (usec): min=352, max=77080, avg=10136.16, stdev=16689.77 00:15:23.895 lat (usec): min=362, max=77087, avg=10142.55, stdev=16690.05 00:15:23.895 clat percentiles (usec): 00:15:23.895 | 1.00th=[ 676], 5.00th=[ 766], 10.00th=[ 848], 20.00th=[ 1090], 00:15:23.895 | 30.00th=[ 1876], 40.00th=[ 3326], 50.00th=[ 4359], 60.00th=[ 5342], 00:15:23.895 | 70.00th=[ 6652], 80.00th=[11469], 90.00th=[28443], 95.00th=[62653], 00:15:23.895 | 99.00th=[69731], 99.50th=[72877], 99.90th=[74974], 99.95th=[76022], 00:15:23.895 | 99.99th=[77071] 00:15:23.895 bw ( KiB/s): min= 896, max=55344, per=79.42%, avg=21848.62, stdev=14875.20, samples=24 00:15:23.895 iops : min= 224, max=13836, avg=5462.12, stdev=3718.76, samples=24 00:15:23.895 lat (usec) : 500=0.02%, 750=2.09%, 1000=6.28% 00:15:23.895 lat (msec) : 2=7.10%, 4=7.81%, 10=15.88%, 20=6.85%, 50=47.98% 00:15:23.895 lat (msec) : 100=4.95%, 250=1.00%, 500=0.03% 00:15:23.895 cpu : usr=99.33%, sys=0.12%, ctx=30, majf=0, minf=5514 00:15:23.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:23.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.895 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.895 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.895 00:15:23.895 Run status group 0 (all jobs): 00:15:23.895 READ: bw=22.7MiB/s (23.8MB/s), 11.4MiB/s-11.5MiB/s (11.9MB/s-12.0MB/s), io=510MiB (534MB), run=22245-22446msec 00:15:23.895 WRITE: bw=26.9MiB/s (28.2MB/s), 13.4MiB/s-14.9MiB/s (14.1MB/s-15.6MB/s), io=512MiB (537MB), run=17225-19058msec 00:15:23.895 ----------------------------------------------------- 00:15:23.895 Suppressions used: 00:15:23.895 count bytes template 00:15:23.895 2 10 /usr/src/fio/parse.c 00:15:23.895 2 192 /usr/src/fio/iolog.c 00:15:23.895 1 8 libtcmalloc_minimal.so 00:15:23.895 1 904 libcrypto.so 00:15:23.895 ----------------------------------------------------- 00:15:23.895 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.895 
17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:23.895 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:23.896 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:23.896 17:19:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:23.896 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:23.896 fio-3.35 00:15:23.896 Starting 1 thread 00:15:38.809 00:15:38.809 test: (groupid=0, jobs=1): err= 0: pid=72861: Wed Oct 30 17:19:21 2024 00:15:38.809 read: IOPS=6854, BW=26.8MiB/s (28.1MB/s)(255MiB/9512msec) 00:15:38.809 slat (usec): min=2, max=316, avg= 5.52, stdev= 2.50 00:15:38.809 clat (usec): min=477, max=34843, avg=18664.80, stdev=3568.11 00:15:38.809 lat (usec): min=483, max=34848, avg=18670.32, stdev=3568.78 00:15:38.809 clat percentiles (usec): 00:15:38.809 | 1.00th=[14091], 5.00th=[14353], 10.00th=[14484], 20.00th=[15139], 00:15:38.809 | 30.00th=[16188], 40.00th=[17433], 50.00th=[18482], 60.00th=[19006], 00:15:38.809 | 70.00th=[20055], 80.00th=[21365], 90.00th=[23462], 95.00th=[25297], 00:15:38.809 | 99.00th=[29230], 99.50th=[30278], 99.90th=[32375], 99.95th=[32900], 00:15:38.809 | 99.99th=[34341] 00:15:38.809 write: IOPS=13.4k, BW=52.2MiB/s (54.8MB/s)(256MiB/4902msec); 0 zone resets 00:15:38.809 slat (usec): min=4, max=123, avg= 5.35, stdev= 2.13 00:15:38.809 clat (usec): min=500, max=49715, avg=9526.69, stdev=10821.47 00:15:38.809 lat (usec): min=504, max=49720, avg=9532.04, stdev=10821.58 00:15:38.809 clat percentiles (usec): 00:15:38.809 | 1.00th=[ 701], 5.00th=[ 832], 10.00th=[ 922], 20.00th=[ 1057], 00:15:38.809 | 30.00th=[ 1205], 40.00th=[ 1696], 50.00th=[ 5866], 60.00th=[ 6915], 00:15:38.809 | 70.00th=[11076], 80.00th=[16188], 90.00th=[30278], 95.00th=[32113], 00:15:38.809 | 99.00th=[38536], 99.50th=[40109], 99.90th=[41681], 99.95th=[42206], 00:15:38.809 | 99.99th=[46924] 00:15:38.809 bw ( KiB/s): min=32280, max=83896, per=98.04%, avg=52428.80, stdev=14371.34, samples=10 00:15:38.809 iops : min= 8070, max=20974, avg=13107.20, stdev=3592.83, samples=10 00:15:38.809 lat (usec) : 500=0.01%, 750=1.04%, 1000=6.73% 00:15:38.809 lat (msec) : 2=12.66%, 4=0.69%, 10=13.45%, 20=41.70%, 50=23.73% 00:15:38.809 cpu : usr=98.95%, sys=0.19%, ctx=20, majf=0, minf=5566 00:15:38.809 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:38.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.809 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.809 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.810 00:15:38.810 Run status group 0 (all jobs): 00:15:38.810 READ: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=255MiB (267MB), run=9512-9512msec 00:15:38.810 WRITE: bw=52.2MiB/s (54.8MB/s), 52.2MiB/s-52.2MiB/s (54.8MB/s-54.8MB/s), io=256MiB (268MB), run=4902-4902msec 00:15:40.726 ----------------------------------------------------- 00:15:40.726 Suppressions used: 00:15:40.726 count bytes template 00:15:40.726 1 5 /usr/src/fio/parse.c 00:15:40.726 2 192 /usr/src/fio/iolog.c 00:15:40.726 1 8 libtcmalloc_minimal.so 00:15:40.726 1 904 libcrypto.so 00:15:40.726 ----------------------------------------------------- 00:15:40.726 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:40.726 Remove shared memory files 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57066 /dev/shm/spdk_tgt_trace.pid71140 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:40.726 ************************************ 00:15:40.726 END TEST ftl_fio_basic 00:15:40.726 ************************************ 00:15:40.726 00:15:40.726 real 1m6.356s 00:15:40.726 user 2m21.219s 00:15:40.726 sys 0m2.670s 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.726 17:19:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:40.726 17:19:23 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:40.726 17:19:23 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:40.726 17:19:23 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.726 17:19:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:40.726 ************************************ 00:15:40.726 START TEST ftl_bdevperf 00:15:40.726 ************************************ 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:40.726 * Looking for test storage... 
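For reference, the three fio passes that make up ftl_fio_basic above (randw-verify, randw-verify-j2 and randw-verify-depth128) are all launched through the same fio_bdev wrapper: it resolves which ASAN runtime the spdk_bdev fio plugin links against and preloads that library together with the plugin, so the sanitizer is initialized before fio picks up the spdk_bdev ioengine. Reassembled from the trace as a standalone sketch (paths copied from the log; the job file shown is just the first of the three):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # same detection the trace shows: ldd | grep libasan | awk '{print $3}'
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio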
00:15:40.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:40.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.726 --rc genhtml_branch_coverage=1 00:15:40.726 --rc genhtml_function_coverage=1 00:15:40.726 --rc genhtml_legend=1 00:15:40.726 --rc geninfo_all_blocks=1 00:15:40.726 --rc geninfo_unexecuted_blocks=1 00:15:40.726 00:15:40.726 ' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:40.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.726 --rc genhtml_branch_coverage=1 00:15:40.726 
--rc genhtml_function_coverage=1 00:15:40.726 --rc genhtml_legend=1 00:15:40.726 --rc geninfo_all_blocks=1 00:15:40.726 --rc geninfo_unexecuted_blocks=1 00:15:40.726 00:15:40.726 ' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:40.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.726 --rc genhtml_branch_coverage=1 00:15:40.726 --rc genhtml_function_coverage=1 00:15:40.726 --rc genhtml_legend=1 00:15:40.726 --rc geninfo_all_blocks=1 00:15:40.726 --rc geninfo_unexecuted_blocks=1 00:15:40.726 00:15:40.726 ' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:40.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.726 --rc genhtml_branch_coverage=1 00:15:40.726 --rc genhtml_function_coverage=1 00:15:40.726 --rc genhtml_legend=1 00:15:40.726 --rc geninfo_all_blocks=1 00:15:40.726 --rc geninfo_unexecuted_blocks=1 00:15:40.726 00:15:40.726 ' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73110 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73110 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73110 ']' 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.726 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:40.727 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.727 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:40.727 17:19:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:40.727 [2024-10-30 17:19:23.637715] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
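The bdevperf instance starting up here was launched with -z and -T ftl0: the former keeps bdevperf idle until an RPC kicks off the actual run, the latter names the bdev to exercise; both readings are inferred from how the script drives the process rather than stated in this log. waitforlisten then blocks until the application's RPC socket answers. A hand-driven sketch of the same flow (the rpc_get_methods poll and the bdevperf.py helper path are assumptions about the stock SPDK layout, not taken from this log):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # crude stand-in for waitforlisten: poll until the RPC socket is up
    until "$SPDK"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    # ... attach the NVMe controllers and create ftl0 via rpc.py, as traced below ...
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py perform_tests
    kill "$bdevperf_pid"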
00:15:40.727 [2024-10-30 17:19:23.638093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73110 ] 00:15:40.986 [2024-10-30 17:19:23.797829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.986 [2024-10-30 17:19:23.898290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.557 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:41.557 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:15:41.557 17:19:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:41.557 17:19:24 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:41.557 17:19:24 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:41.557 17:19:24 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:41.557 17:19:24 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:41.557 17:19:24 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:41.818 17:19:24 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:41.818 17:19:24 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:41.818 17:19:24 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:41.818 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:15:41.818 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:41.818 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:15:41.818 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:15:41.818 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:42.079 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:42.080 { 00:15:42.080 "name": "nvme0n1", 00:15:42.080 "aliases": [ 00:15:42.080 "5bd45836-5190-47f3-bb2d-a9ac0d3d6921" 00:15:42.080 ], 00:15:42.080 "product_name": "NVMe disk", 00:15:42.080 "block_size": 4096, 00:15:42.080 "num_blocks": 1310720, 00:15:42.080 "uuid": "5bd45836-5190-47f3-bb2d-a9ac0d3d6921", 00:15:42.080 "numa_id": -1, 00:15:42.080 "assigned_rate_limits": { 00:15:42.080 "rw_ios_per_sec": 0, 00:15:42.080 "rw_mbytes_per_sec": 0, 00:15:42.080 "r_mbytes_per_sec": 0, 00:15:42.080 "w_mbytes_per_sec": 0 00:15:42.080 }, 00:15:42.080 "claimed": true, 00:15:42.080 "claim_type": "read_many_write_one", 00:15:42.080 "zoned": false, 00:15:42.080 "supported_io_types": { 00:15:42.080 "read": true, 00:15:42.080 "write": true, 00:15:42.080 "unmap": true, 00:15:42.080 "flush": true, 00:15:42.080 "reset": true, 00:15:42.080 "nvme_admin": true, 00:15:42.080 "nvme_io": true, 00:15:42.080 "nvme_io_md": false, 00:15:42.080 "write_zeroes": true, 00:15:42.080 "zcopy": false, 00:15:42.080 "get_zone_info": false, 00:15:42.080 "zone_management": false, 00:15:42.080 "zone_append": false, 00:15:42.080 "compare": true, 00:15:42.080 "compare_and_write": false, 00:15:42.080 "abort": true, 00:15:42.080 "seek_hole": false, 00:15:42.080 "seek_data": false, 00:15:42.080 "copy": true, 00:15:42.080 "nvme_iov_md": false 00:15:42.080 }, 00:15:42.080 "driver_specific": { 00:15:42.080 
"nvme": [ 00:15:42.080 { 00:15:42.080 "pci_address": "0000:00:11.0", 00:15:42.080 "trid": { 00:15:42.080 "trtype": "PCIe", 00:15:42.080 "traddr": "0000:00:11.0" 00:15:42.080 }, 00:15:42.080 "ctrlr_data": { 00:15:42.080 "cntlid": 0, 00:15:42.080 "vendor_id": "0x1b36", 00:15:42.080 "model_number": "QEMU NVMe Ctrl", 00:15:42.080 "serial_number": "12341", 00:15:42.080 "firmware_revision": "8.0.0", 00:15:42.080 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:42.080 "oacs": { 00:15:42.080 "security": 0, 00:15:42.080 "format": 1, 00:15:42.080 "firmware": 0, 00:15:42.080 "ns_manage": 1 00:15:42.080 }, 00:15:42.080 "multi_ctrlr": false, 00:15:42.080 "ana_reporting": false 00:15:42.080 }, 00:15:42.080 "vs": { 00:15:42.080 "nvme_version": "1.4" 00:15:42.080 }, 00:15:42.080 "ns_data": { 00:15:42.080 "id": 1, 00:15:42.080 "can_share": false 00:15:42.080 } 00:15:42.080 } 00:15:42.080 ], 00:15:42.080 "mp_policy": "active_passive" 00:15:42.080 } 00:15:42.080 } 00:15:42.080 ]' 00:15:42.080 17:19:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:42.080 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:15:42.080 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=7558edf6-2d0b-4864-bdcc-c1731f4338ff 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:42.340 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7558edf6-2d0b-4864-bdcc-c1731f4338ff 00:15:42.600 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:42.860 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=13980658-f201-4bff-ab6c-6a4c52aebc0c 00:15:42.860 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 13980658-f201-4bff-ab6c-6a4c52aebc0c 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.120 17:19:25 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:15:43.120 17:19:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:43.380 { 00:15:43.380 "name": "9f489a89-84fe-40b3-acd4-39639fe36864", 00:15:43.380 "aliases": [ 00:15:43.380 "lvs/nvme0n1p0" 00:15:43.380 ], 00:15:43.380 "product_name": "Logical Volume", 00:15:43.380 "block_size": 4096, 00:15:43.380 "num_blocks": 26476544, 00:15:43.380 "uuid": "9f489a89-84fe-40b3-acd4-39639fe36864", 00:15:43.380 "assigned_rate_limits": { 00:15:43.380 "rw_ios_per_sec": 0, 00:15:43.380 "rw_mbytes_per_sec": 0, 00:15:43.380 "r_mbytes_per_sec": 0, 00:15:43.380 "w_mbytes_per_sec": 0 00:15:43.380 }, 00:15:43.380 "claimed": false, 00:15:43.380 "zoned": false, 00:15:43.380 "supported_io_types": { 00:15:43.380 "read": true, 00:15:43.380 "write": true, 00:15:43.380 "unmap": true, 00:15:43.380 "flush": false, 00:15:43.380 "reset": true, 00:15:43.380 "nvme_admin": false, 00:15:43.380 "nvme_io": false, 00:15:43.380 "nvme_io_md": false, 00:15:43.380 "write_zeroes": true, 00:15:43.380 "zcopy": false, 00:15:43.380 "get_zone_info": false, 00:15:43.380 "zone_management": false, 00:15:43.380 "zone_append": false, 00:15:43.380 "compare": false, 00:15:43.380 "compare_and_write": false, 00:15:43.380 "abort": false, 00:15:43.380 "seek_hole": true, 00:15:43.380 "seek_data": true, 00:15:43.380 "copy": false, 00:15:43.380 "nvme_iov_md": false 00:15:43.380 }, 00:15:43.380 "driver_specific": { 00:15:43.380 "lvol": { 00:15:43.380 "lvol_store_uuid": "13980658-f201-4bff-ab6c-6a4c52aebc0c", 00:15:43.380 "base_bdev": "nvme0n1", 00:15:43.380 "thin_provision": true, 00:15:43.380 "num_allocated_clusters": 0, 00:15:43.380 "snapshot": false, 00:15:43.380 "clone": false, 00:15:43.380 "esnap_clone": false 00:15:43.380 } 00:15:43.380 } 00:15:43.380 } 00:15:43.380 ]' 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:43.380 17:19:26 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:43.641 17:19:26 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:43.641 17:19:26 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:43.641 17:19:26 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.641 17:19:26 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.641 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:43.641 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:15:43.641 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:15:43.641 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f489a89-84fe-40b3-acd4-39639fe36864 00:15:43.901 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:43.901 { 00:15:43.901 "name": "9f489a89-84fe-40b3-acd4-39639fe36864", 00:15:43.901 "aliases": [ 00:15:43.901 "lvs/nvme0n1p0" 00:15:43.901 ], 00:15:43.901 "product_name": "Logical Volume", 00:15:43.901 "block_size": 4096, 00:15:43.901 "num_blocks": 26476544, 00:15:43.901 "uuid": "9f489a89-84fe-40b3-acd4-39639fe36864", 00:15:43.901 "assigned_rate_limits": { 00:15:43.901 "rw_ios_per_sec": 0, 00:15:43.901 "rw_mbytes_per_sec": 0, 00:15:43.901 "r_mbytes_per_sec": 0, 00:15:43.901 "w_mbytes_per_sec": 0 00:15:43.901 }, 00:15:43.901 "claimed": false, 00:15:43.901 "zoned": false, 00:15:43.901 "supported_io_types": { 00:15:43.901 "read": true, 00:15:43.901 "write": true, 00:15:43.901 "unmap": true, 00:15:43.901 "flush": false, 00:15:43.901 "reset": true, 00:15:43.901 "nvme_admin": false, 00:15:43.901 "nvme_io": false, 00:15:43.901 "nvme_io_md": false, 00:15:43.901 "write_zeroes": true, 00:15:43.901 "zcopy": false, 00:15:43.901 "get_zone_info": false, 00:15:43.901 "zone_management": false, 00:15:43.901 "zone_append": false, 00:15:43.901 "compare": false, 00:15:43.901 "compare_and_write": false, 00:15:43.901 "abort": false, 00:15:43.901 "seek_hole": true, 00:15:43.901 "seek_data": true, 00:15:43.901 "copy": false, 00:15:43.901 "nvme_iov_md": false 00:15:43.901 }, 00:15:43.901 "driver_specific": { 00:15:43.901 "lvol": { 00:15:43.901 "lvol_store_uuid": "13980658-f201-4bff-ab6c-6a4c52aebc0c", 00:15:43.902 "base_bdev": "nvme0n1", 00:15:43.902 "thin_provision": true, 00:15:43.902 "num_allocated_clusters": 0, 00:15:43.902 "snapshot": false, 00:15:43.902 "clone": false, 00:15:43.902 "esnap_clone": false 00:15:43.902 } 00:15:43.902 } 00:15:43.902 } 00:15:43.902 ]' 00:15:43.902 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:43.902 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:15:43.902 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:43.902 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:43.902 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:43.902 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:15:43.902 17:19:26 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:43.902 17:19:26 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:44.161 17:19:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:44.161 17:19:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 9f489a89-84fe-40b3-acd4-39639fe36864 00:15:44.161 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=9f489a89-84fe-40b3-acd4-39639fe36864 00:15:44.161 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:44.161 17:19:26 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:15:44.161 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:15:44.161 17:19:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f489a89-84fe-40b3-acd4-39639fe36864 00:15:44.161 17:19:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:44.161 { 00:15:44.161 "name": "9f489a89-84fe-40b3-acd4-39639fe36864", 00:15:44.161 "aliases": [ 00:15:44.161 "lvs/nvme0n1p0" 00:15:44.161 ], 00:15:44.161 "product_name": "Logical Volume", 00:15:44.161 "block_size": 4096, 00:15:44.161 "num_blocks": 26476544, 00:15:44.161 "uuid": "9f489a89-84fe-40b3-acd4-39639fe36864", 00:15:44.161 "assigned_rate_limits": { 00:15:44.161 "rw_ios_per_sec": 0, 00:15:44.161 "rw_mbytes_per_sec": 0, 00:15:44.162 "r_mbytes_per_sec": 0, 00:15:44.162 "w_mbytes_per_sec": 0 00:15:44.162 }, 00:15:44.162 "claimed": false, 00:15:44.162 "zoned": false, 00:15:44.162 "supported_io_types": { 00:15:44.162 "read": true, 00:15:44.162 "write": true, 00:15:44.162 "unmap": true, 00:15:44.162 "flush": false, 00:15:44.162 "reset": true, 00:15:44.162 "nvme_admin": false, 00:15:44.162 "nvme_io": false, 00:15:44.162 "nvme_io_md": false, 00:15:44.162 "write_zeroes": true, 00:15:44.162 "zcopy": false, 00:15:44.162 "get_zone_info": false, 00:15:44.162 "zone_management": false, 00:15:44.162 "zone_append": false, 00:15:44.162 "compare": false, 00:15:44.162 "compare_and_write": false, 00:15:44.162 "abort": false, 00:15:44.162 "seek_hole": true, 00:15:44.162 "seek_data": true, 00:15:44.162 "copy": false, 00:15:44.162 "nvme_iov_md": false 00:15:44.162 }, 00:15:44.162 "driver_specific": { 00:15:44.162 "lvol": { 00:15:44.162 "lvol_store_uuid": "13980658-f201-4bff-ab6c-6a4c52aebc0c", 00:15:44.162 "base_bdev": "nvme0n1", 00:15:44.162 "thin_provision": true, 00:15:44.162 "num_allocated_clusters": 0, 00:15:44.162 "snapshot": false, 00:15:44.162 "clone": false, 00:15:44.162 "esnap_clone": false 00:15:44.162 } 00:15:44.162 } 00:15:44.162 } 00:15:44.162 ]' 00:15:44.162 17:19:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:44.424 17:19:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:15:44.424 17:19:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:44.424 17:19:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:44.424 17:19:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:44.424 17:19:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:15:44.424 17:19:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:44.424 17:19:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9f489a89-84fe-40b3-acd4-39639fe36864 -c nvc0n1p0 --l2p_dram_limit 20 00:15:44.424 [2024-10-30 17:19:27.363075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.363113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:44.424 [2024-10-30 17:19:27.363124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:44.424 [2024-10-30 17:19:27.363131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.363171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.363180] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:44.424 [2024-10-30 17:19:27.363187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:44.424 [2024-10-30 17:19:27.363195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.363220] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:44.424 [2024-10-30 17:19:27.363982] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:44.424 [2024-10-30 17:19:27.364000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.364010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:44.424 [2024-10-30 17:19:27.364017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:15:44.424 [2024-10-30 17:19:27.364024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.364072] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 341ff6f4-cc0f-4389-b253-42203b5dcc0e 00:15:44.424 [2024-10-30 17:19:27.365016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.365038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:44.424 [2024-10-30 17:19:27.365050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:44.424 [2024-10-30 17:19:27.365058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.369668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.369695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:44.424 [2024-10-30 17:19:27.369703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.577 ms 00:15:44.424 [2024-10-30 17:19:27.369710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.369784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.369791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:44.424 [2024-10-30 17:19:27.369803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:15:44.424 [2024-10-30 17:19:27.369809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.369844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.369851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:44.424 [2024-10-30 17:19:27.369860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:44.424 [2024-10-30 17:19:27.369867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.369883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:44.424 [2024-10-30 17:19:27.372711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.372736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:44.424 [2024-10-30 17:19:27.372744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.834 ms 00:15:44.424 [2024-10-30 17:19:27.372751] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.372774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.372787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:44.424 [2024-10-30 17:19:27.372793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:44.424 [2024-10-30 17:19:27.372800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.372816] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:44.424 [2024-10-30 17:19:27.372924] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:44.424 [2024-10-30 17:19:27.372936] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:44.424 [2024-10-30 17:19:27.372946] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:44.424 [2024-10-30 17:19:27.372953] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:44.424 [2024-10-30 17:19:27.372961] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:44.424 [2024-10-30 17:19:27.372967] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:44.424 [2024-10-30 17:19:27.372975] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:44.424 [2024-10-30 17:19:27.372980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:44.424 [2024-10-30 17:19:27.372987] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:44.424 [2024-10-30 17:19:27.372993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.373001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:44.424 [2024-10-30 17:19:27.373007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:15:44.424 [2024-10-30 17:19:27.373016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.373078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.424 [2024-10-30 17:19:27.373086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:44.424 [2024-10-30 17:19:27.373092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:44.424 [2024-10-30 17:19:27.373100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.424 [2024-10-30 17:19:27.373168] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:44.424 [2024-10-30 17:19:27.373177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:44.425 [2024-10-30 17:19:27.373183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:44.425 [2024-10-30 17:19:27.373190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:44.425 [2024-10-30 17:19:27.373218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:44.425 
[2024-10-30 17:19:27.373230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:44.425 [2024-10-30 17:19:27.373236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:44.425 [2024-10-30 17:19:27.373249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:44.425 [2024-10-30 17:19:27.373256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:44.425 [2024-10-30 17:19:27.373263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:44.425 [2024-10-30 17:19:27.373274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:44.425 [2024-10-30 17:19:27.373279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:44.425 [2024-10-30 17:19:27.373288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:44.425 [2024-10-30 17:19:27.373299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:44.425 [2024-10-30 17:19:27.373304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:44.425 [2024-10-30 17:19:27.373315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:44.425 [2024-10-30 17:19:27.373326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:44.425 [2024-10-30 17:19:27.373332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:44.425 [2024-10-30 17:19:27.373344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:44.425 [2024-10-30 17:19:27.373349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:44.425 [2024-10-30 17:19:27.373359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:44.425 [2024-10-30 17:19:27.373366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:44.425 [2024-10-30 17:19:27.373378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:44.425 [2024-10-30 17:19:27.373382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:44.425 [2024-10-30 17:19:27.373393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:44.425 [2024-10-30 17:19:27.373400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:44.425 [2024-10-30 17:19:27.373405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:44.425 [2024-10-30 17:19:27.373411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:44.425 [2024-10-30 17:19:27.373416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:15:44.425 [2024-10-30 17:19:27.373422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:44.425 [2024-10-30 17:19:27.373434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:44.425 [2024-10-30 17:19:27.373439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373445] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:44.425 [2024-10-30 17:19:27.373451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:44.425 [2024-10-30 17:19:27.373458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:44.425 [2024-10-30 17:19:27.373463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.425 [2024-10-30 17:19:27.373471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:44.425 [2024-10-30 17:19:27.373476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:44.425 [2024-10-30 17:19:27.373482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:44.425 [2024-10-30 17:19:27.373487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:44.425 [2024-10-30 17:19:27.373493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:44.425 [2024-10-30 17:19:27.373498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:44.425 [2024-10-30 17:19:27.373508] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:44.425 [2024-10-30 17:19:27.373514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:44.425 [2024-10-30 17:19:27.373522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:44.425 [2024-10-30 17:19:27.373528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:44.425 [2024-10-30 17:19:27.373535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:44.425 [2024-10-30 17:19:27.373540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:44.425 [2024-10-30 17:19:27.373547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:44.425 [2024-10-30 17:19:27.373552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:44.425 [2024-10-30 17:19:27.373558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:44.425 [2024-10-30 17:19:27.373563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:44.425 [2024-10-30 17:19:27.373571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:44.425 [2024-10-30 17:19:27.373577] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:44.425 [2024-10-30 17:19:27.373584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:44.425 [2024-10-30 17:19:27.373589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:44.425 [2024-10-30 17:19:27.373596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:44.425 [2024-10-30 17:19:27.373601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:44.425 [2024-10-30 17:19:27.373607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:44.425 [2024-10-30 17:19:27.373613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:44.425 [2024-10-30 17:19:27.373621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:44.425 [2024-10-30 17:19:27.373626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:44.425 [2024-10-30 17:19:27.373632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:44.425 [2024-10-30 17:19:27.373638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:44.425 [2024-10-30 17:19:27.373645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.425 [2024-10-30 17:19:27.373650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:44.425 [2024-10-30 17:19:27.373659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:15:44.425 [2024-10-30 17:19:27.373665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.425 [2024-10-30 17:19:27.373691] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
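The layout dump above ties directly to the --l2p_dram_limit 20 argument passed to bdev_ftl_create: the superblock reports 20971520 L2P entries at an address size of 4 bytes, i.e. an 80 MiB mapping table (the 80.00 MiB l2p region), of which the 20 MiB command-line budget allows only a fraction to stay resident in DRAM. A quick sanity check of that arithmetic, as a sketch in shell (the figures are the ones printed above; the "one entry per 4 KiB logical block" reading is the usual FTL interpretation, consistent with the 4096-byte block size of the base lvol):

  # 20971520 L2P entries * 4 bytes per entry, expressed in MiB
  echo $(( 20971520 * 4 / 1024 / 1024 ))            # -> 80, matching the 80.00 MiB l2p region
  # user-visible capacity implied by one entry per 4 KiB block, in GiB
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))  # -> 80 GiB addressable through the L2P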
00:15:44.425 [2024-10-30 17:19:27.373698] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:48.632 [2024-10-30 17:19:31.370282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.370364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:48.632 [2024-10-30 17:19:31.370385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3996.570 ms 00:15:48.632 [2024-10-30 17:19:31.370400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.401832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.401893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:48.632 [2024-10-30 17:19:31.401914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.208 ms 00:15:48.632 [2024-10-30 17:19:31.401923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.402063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.402073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:48.632 [2024-10-30 17:19:31.402088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:15:48.632 [2024-10-30 17:19:31.402098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.450559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.450612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:48.632 [2024-10-30 17:19:31.450630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.423 ms 00:15:48.632 [2024-10-30 17:19:31.450639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.450681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.450690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:48.632 [2024-10-30 17:19:31.450702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:48.632 [2024-10-30 17:19:31.450713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.451331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.451367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:48.632 [2024-10-30 17:19:31.451381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:15:48.632 [2024-10-30 17:19:31.451391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.451512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.451524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:48.632 [2024-10-30 17:19:31.451538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:15:48.632 [2024-10-30 17:19:31.451547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.467292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.467334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:48.632 [2024-10-30 
17:19:31.467348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.724 ms 00:15:48.632 [2024-10-30 17:19:31.467356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.481136] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:48.632 [2024-10-30 17:19:31.489081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.632 [2024-10-30 17:19:31.489133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:48.632 [2024-10-30 17:19:31.489144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.639 ms 00:15:48.632 [2024-10-30 17:19:31.489155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.632 [2024-10-30 17:19:31.598360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.633 [2024-10-30 17:19:31.598427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:48.633 [2024-10-30 17:19:31.598442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.174 ms 00:15:48.633 [2024-10-30 17:19:31.598455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.633 [2024-10-30 17:19:31.598660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.633 [2024-10-30 17:19:31.598679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:48.633 [2024-10-30 17:19:31.598690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:15:48.633 [2024-10-30 17:19:31.598701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.625100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.625319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:48.895 [2024-10-30 17:19:31.625706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.349 ms 00:15:48.895 [2024-10-30 17:19:31.625767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.651613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.651797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:48.895 [2024-10-30 17:19:31.652228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.553 ms 00:15:48.895 [2024-10-30 17:19:31.652343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.653338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.653536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:48.895 [2024-10-30 17:19:31.653622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:15:48.895 [2024-10-30 17:19:31.653652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.740297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.740509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:48.895 [2024-10-30 17:19:31.740533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.491 ms 00:15:48.895 [2024-10-30 17:19:31.740546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 
17:19:31.768308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.768362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:48.895 [2024-10-30 17:19:31.768375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.598 ms 00:15:48.895 [2024-10-30 17:19:31.768387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.794518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.794570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:48.895 [2024-10-30 17:19:31.794582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.080 ms 00:15:48.895 [2024-10-30 17:19:31.794593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.820930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.820981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:48.895 [2024-10-30 17:19:31.820994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.292 ms 00:15:48.895 [2024-10-30 17:19:31.821005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.821055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.821074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:48.895 [2024-10-30 17:19:31.821083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:48.895 [2024-10-30 17:19:31.821094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.821186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-10-30 17:19:31.821226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:48.895 [2024-10-30 17:19:31.821237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:15:48.895 [2024-10-30 17:19:31.821248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-10-30 17:19:31.822492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4458.894 ms, result 0 00:15:48.895 { 00:15:48.895 "name": "ftl0", 00:15:48.895 "uuid": "341ff6f4-cc0f-4389-b253-42203b5dcc0e" 00:15:48.895 } 00:15:48.895 17:19:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:48.895 17:19:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:15:48.895 17:19:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:15:49.156 17:19:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:49.418 [2024-10-30 17:19:32.170272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:49.418 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:49.418 Zero copy mechanism will not be used. 00:15:49.418 Running I/O for 4 seconds... 
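At this point FTL startup has completed (4458.894 ms) and the first bdevperf pass against ftl0 is underway. Roughly the same create / measure / tear-down cycle can be driven by hand with the RPCs that appear in this log; a minimal sketch, assuming the bdevperf application is already running and serving RPCs as it is in this run, and using the same bdev names, lvol UUID and arguments as above:

  SPDK=/home/vagrant/spdk_repo/spdk
  # create the FTL bdev on top of the lvol, with nvc0n1p0 as NV cache and a
  # 20 MiB DRAM budget for the L2P (same arguments as ftl/bdevperf.sh@26)
  $SPDK/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 9f489a89-84fe-40b3-acd4-39639fe36864 -c nvc0n1p0 --l2p_dram_limit 20
  # 4-second random-write pass at queue depth 1 with 68 KiB I/Os
  $SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
  # dump FTL statistics, then remove the device again
  $SPDK/scripts/rpc.py bdev_ftl_get_stats -b ftl0
  $SPDK/scripts/rpc.py bdev_ftl_delete -b ftl0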
00:15:51.307 790.00 IOPS, 52.46 MiB/s [2024-10-30T17:19:35.230Z] 728.50 IOPS, 48.38 MiB/s [2024-10-30T17:19:36.617Z] 763.33 IOPS, 50.69 MiB/s [2024-10-30T17:19:36.617Z] 742.75 IOPS, 49.32 MiB/s 00:15:53.636 Latency(us) 00:15:53.636 [2024-10-30T17:19:36.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.636 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:53.636 ftl0 : 4.00 742.67 49.32 0.00 0.00 1421.80 261.51 6956.90 00:15:53.636 [2024-10-30T17:19:36.617Z] =================================================================================================================== 00:15:53.636 [2024-10-30T17:19:36.617Z] Total : 742.67 49.32 0.00 0.00 1421.80 261.51 6956.90 00:15:53.636 { 00:15:53.636 "results": [ 00:15:53.636 { 00:15:53.636 "job": "ftl0", 00:15:53.636 "core_mask": "0x1", 00:15:53.636 "workload": "randwrite", 00:15:53.636 "status": "finished", 00:15:53.636 "queue_depth": 1, 00:15:53.636 "io_size": 69632, 00:15:53.636 "runtime": 4.001771, 00:15:53.636 "iops": 742.6711823340216, 00:15:53.636 "mibps": 49.31800820186862, 00:15:53.636 "io_failed": 0, 00:15:53.636 "io_timeout": 0, 00:15:53.636 "avg_latency_us": 1421.8015778030851, 00:15:53.636 "min_latency_us": 261.51384615384615, 00:15:53.636 "max_latency_us": 6956.898461538462 00:15:53.636 } 00:15:53.636 ], 00:15:53.636 "core_count": 1 00:15:53.636 } 00:15:53.636 [2024-10-30 17:19:36.179619] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:53.636 17:19:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:53.636 [2024-10-30 17:19:36.296304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:53.636 Running I/O for 4 seconds... 
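Each perform_tests call returns a JSON block like the one shown above, from which the summary table is rendered. The same figures can be pulled out directly with jq over the saved output; a sketch, where results.json is a hypothetical file holding that JSON:

  # extract IOPS, throughput and mean latency for the ftl0 job
  jq -r '.results[] | select(.job=="ftl0")
         | "\(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json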
00:15:55.521 5889.00 IOPS, 23.00 MiB/s [2024-10-30T17:19:39.445Z] 5716.00 IOPS, 22.33 MiB/s [2024-10-30T17:19:40.388Z] 5403.33 IOPS, 21.11 MiB/s [2024-10-30T17:19:40.388Z] 5135.75 IOPS, 20.06 MiB/s 00:15:57.407 Latency(us) 00:15:57.407 [2024-10-30T17:19:40.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.407 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.407 ftl0 : 4.04 5117.40 19.99 0.00 0.00 24896.82 277.27 52428.80 00:15:57.407 [2024-10-30T17:19:40.388Z] =================================================================================================================== 00:15:57.407 [2024-10-30T17:19:40.388Z] Total : 5117.40 19.99 0.00 0.00 24896.82 0.00 52428.80 00:15:57.407 [2024-10-30 17:19:40.344060] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:57.407 { 00:15:57.407 "results": [ 00:15:57.407 { 00:15:57.407 "job": "ftl0", 00:15:57.407 "core_mask": "0x1", 00:15:57.407 "workload": "randwrite", 00:15:57.407 "status": "finished", 00:15:57.407 "queue_depth": 128, 00:15:57.407 "io_size": 4096, 00:15:57.407 "runtime": 4.0376, 00:15:57.407 "iops": 5117.396473152367, 00:15:57.407 "mibps": 19.989829973251435, 00:15:57.407 "io_failed": 0, 00:15:57.407 "io_timeout": 0, 00:15:57.407 "avg_latency_us": 24896.81599845126, 00:15:57.407 "min_latency_us": 277.2676923076923, 00:15:57.407 "max_latency_us": 52428.8 00:15:57.407 } 00:15:57.407 ], 00:15:57.407 "core_count": 1 00:15:57.407 } 00:15:57.407 17:19:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:57.668 [2024-10-30 17:19:40.465895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:57.668 Running I/O for 4 seconds... 
00:15:59.557 4205.00 IOPS, 16.43 MiB/s [2024-10-30T17:19:43.485Z] 4335.00 IOPS, 16.93 MiB/s [2024-10-30T17:19:44.928Z] 4334.33 IOPS, 16.93 MiB/s [2024-10-30T17:19:44.928Z] 4333.25 IOPS, 16.93 MiB/s 00:16:01.947 Latency(us) 00:16:01.947 [2024-10-30T17:19:44.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.947 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.947 Verification LBA range: start 0x0 length 0x1400000 00:16:01.947 ftl0 : 4.01 4349.01 16.99 0.00 0.00 29353.79 475.77 42547.99 00:16:01.947 [2024-10-30T17:19:44.928Z] =================================================================================================================== 00:16:01.947 [2024-10-30T17:19:44.928Z] Total : 4349.01 16.99 0.00 0.00 29353.79 0.00 42547.99 00:16:01.947 [2024-10-30 17:19:44.497822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:01.947 { 00:16:01.947 "results": [ 00:16:01.947 { 00:16:01.947 "job": "ftl0", 00:16:01.947 "core_mask": "0x1", 00:16:01.947 "workload": "verify", 00:16:01.947 "status": "finished", 00:16:01.947 "verify_range": { 00:16:01.947 "start": 0, 00:16:01.947 "length": 20971520 00:16:01.947 }, 00:16:01.947 "queue_depth": 128, 00:16:01.947 "io_size": 4096, 00:16:01.947 "runtime": 4.014938, 00:16:01.947 "iops": 4349.008627281418, 00:16:01.947 "mibps": 16.98831495031804, 00:16:01.947 "io_failed": 0, 00:16:01.947 "io_timeout": 0, 00:16:01.947 "avg_latency_us": 29353.79463084765, 00:16:01.947 "min_latency_us": 475.7661538461538, 00:16:01.947 "max_latency_us": 42547.987692307695 00:16:01.947 } 00:16:01.947 ], 00:16:01.947 "core_count": 1 00:16:01.947 } 00:16:01.947 17:19:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:01.947 [2024-10-30 17:19:44.712704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.947 [2024-10-30 17:19:44.712961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:01.947 [2024-10-30 17:19:44.712987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:01.947 [2024-10-30 17:19:44.713002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.947 [2024-10-30 17:19:44.713034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:01.947 [2024-10-30 17:19:44.716155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.947 [2024-10-30 17:19:44.716347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:01.947 [2024-10-30 17:19:44.716376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:16:01.947 [2024-10-30 17:19:44.716385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.947 [2024-10-30 17:19:44.719437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.947 [2024-10-30 17:19:44.719610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:01.947 [2024-10-30 17:19:44.719637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:16:01.947 [2024-10-30 17:19:44.719646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:44.948525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:44.948583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:02.210 [2024-10-30 17:19:44.948603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 228.847 ms 00:16:02.210 [2024-10-30 17:19:44.948611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:44.954800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:44.954990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:02.210 [2024-10-30 17:19:44.955020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:16:02.210 [2024-10-30 17:19:44.955030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:44.982061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:44.982267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:02.210 [2024-10-30 17:19:44.982295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.961 ms 00:16:02.210 [2024-10-30 17:19:44.982304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:45.000880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:45.000927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:02.210 [2024-10-30 17:19:45.000948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.526 ms 00:16:02.210 [2024-10-30 17:19:45.000960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:45.001130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:45.001142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:02.210 [2024-10-30 17:19:45.001157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:02.210 [2024-10-30 17:19:45.001164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:45.028224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:45.028417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:02.210 [2024-10-30 17:19:45.028443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.037 ms 00:16:02.210 [2024-10-30 17:19:45.028451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:45.055097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:45.055147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:02.210 [2024-10-30 17:19:45.055162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.566 ms 00:16:02.210 [2024-10-30 17:19:45.055169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:45.080904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:45.080951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:02.210 [2024-10-30 17:19:45.080965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.655 ms 00:16:02.210 [2024-10-30 17:19:45.080973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:45.106842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.210 [2024-10-30 17:19:45.107036] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:02.210 [2024-10-30 17:19:45.107067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.759 ms 00:16:02.210 [2024-10-30 17:19:45.107074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.210 [2024-10-30 17:19:45.107150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:02.210 [2024-10-30 17:19:45.107166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:02.210 [2024-10-30 17:19:45.107394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:02.210 [2024-10-30 17:19:45.107610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.107994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108058] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:02.211 [2024-10-30 17:19:45.108101] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:02.211 [2024-10-30 17:19:45.108112] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 341ff6f4-cc0f-4389-b253-42203b5dcc0e 00:16:02.211 [2024-10-30 17:19:45.108120] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:02.211 [2024-10-30 17:19:45.108129] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:02.211 [2024-10-30 17:19:45.108137] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:02.211 [2024-10-30 17:19:45.108146] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:02.211 [2024-10-30 17:19:45.108156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:02.211 [2024-10-30 17:19:45.108166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:02.211 [2024-10-30 17:19:45.108173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:02.211 [2024-10-30 17:19:45.108184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:02.211 [2024-10-30 17:19:45.108190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:02.211 [2024-10-30 17:19:45.108226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.211 [2024-10-30 17:19:45.108235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:02.211 [2024-10-30 17:19:45.108263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:16:02.211 [2024-10-30 17:19:45.108271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.211 [2024-10-30 17:19:45.122141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.211 [2024-10-30 17:19:45.122340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:02.211 [2024-10-30 17:19:45.122367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.808 ms 00:16:02.211 [2024-10-30 17:19:45.122375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.211 [2024-10-30 17:19:45.122757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.211 [2024-10-30 17:19:45.122776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:02.211 [2024-10-30 17:19:45.122787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:16:02.211 [2024-10-30 17:19:45.122795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.211 [2024-10-30 17:19:45.161877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.211 [2024-10-30 17:19:45.162065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:02.211 [2024-10-30 17:19:45.162092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.211 [2024-10-30 17:19:45.162100] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:02.211 [2024-10-30 17:19:45.162176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.211 [2024-10-30 17:19:45.162185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:02.211 [2024-10-30 17:19:45.162195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.211 [2024-10-30 17:19:45.162230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.211 [2024-10-30 17:19:45.162325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.211 [2024-10-30 17:19:45.162336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:02.211 [2024-10-30 17:19:45.162349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.211 [2024-10-30 17:19:45.162357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.211 [2024-10-30 17:19:45.162376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.211 [2024-10-30 17:19:45.162385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:02.211 [2024-10-30 17:19:45.162395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.211 [2024-10-30 17:19:45.162402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.248546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.473 [2024-10-30 17:19:45.248770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:02.473 [2024-10-30 17:19:45.248805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.473 [2024-10-30 17:19:45.248814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.319673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.473 [2024-10-30 17:19:45.319869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:02.473 [2024-10-30 17:19:45.319895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.473 [2024-10-30 17:19:45.319904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.320020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.473 [2024-10-30 17:19:45.320031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:02.473 [2024-10-30 17:19:45.320042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.473 [2024-10-30 17:19:45.320053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.320100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.473 [2024-10-30 17:19:45.320109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:02.473 [2024-10-30 17:19:45.320120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.473 [2024-10-30 17:19:45.320129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.320270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.473 [2024-10-30 17:19:45.320283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:02.473 [2024-10-30 17:19:45.320297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:02.473 [2024-10-30 17:19:45.320305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.320348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.473 [2024-10-30 17:19:45.320358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:02.473 [2024-10-30 17:19:45.320369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.473 [2024-10-30 17:19:45.320377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.320419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.473 [2024-10-30 17:19:45.320428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:02.473 [2024-10-30 17:19:45.320439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.473 [2024-10-30 17:19:45.320447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.320500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.473 [2024-10-30 17:19:45.320517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:02.473 [2024-10-30 17:19:45.320529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.473 [2024-10-30 17:19:45.320538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.473 [2024-10-30 17:19:45.320680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 607.928 ms, result 0 00:16:02.473 true 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73110 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73110 ']' 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73110 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73110 00:16:02.473 killing process with pid 73110 00:16:02.473 Received shutdown signal, test time was about 4.000000 seconds 00:16:02.473 00:16:02.473 Latency(us) 00:16:02.473 [2024-10-30T17:19:45.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.473 [2024-10-30T17:19:45.454Z] =================================================================================================================== 00:16:02.473 [2024-10-30T17:19:45.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73110' 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73110 00:16:02.473 17:19:45 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73110 00:16:03.417 Remove shared memory files 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:03.417 17:19:46 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:03.417 ************************************ 00:16:03.417 END TEST ftl_bdevperf 00:16:03.417 ************************************ 00:16:03.417 00:16:03.417 real 0m22.758s 00:16:03.417 user 0m25.351s 00:16:03.417 sys 0m0.944s 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:03.417 17:19:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:03.417 17:19:46 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:03.417 17:19:46 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:03.417 17:19:46 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:03.417 17:19:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:03.417 ************************************ 00:16:03.417 START TEST ftl_trim 00:16:03.417 ************************************ 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:03.417 * Looking for test storage... 00:16:03.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.417 17:19:46 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:03.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.417 --rc genhtml_branch_coverage=1 00:16:03.417 --rc genhtml_function_coverage=1 00:16:03.417 --rc genhtml_legend=1 00:16:03.417 --rc geninfo_all_blocks=1 00:16:03.417 --rc geninfo_unexecuted_blocks=1 00:16:03.417 00:16:03.417 ' 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:03.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.417 --rc genhtml_branch_coverage=1 00:16:03.417 --rc genhtml_function_coverage=1 00:16:03.417 --rc genhtml_legend=1 00:16:03.417 --rc geninfo_all_blocks=1 00:16:03.417 --rc geninfo_unexecuted_blocks=1 00:16:03.417 00:16:03.417 ' 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:03.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.417 --rc genhtml_branch_coverage=1 00:16:03.417 --rc genhtml_function_coverage=1 00:16:03.417 --rc genhtml_legend=1 00:16:03.417 --rc geninfo_all_blocks=1 00:16:03.417 --rc geninfo_unexecuted_blocks=1 00:16:03.417 00:16:03.417 ' 00:16:03.417 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:03.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.417 --rc genhtml_branch_coverage=1 00:16:03.417 --rc genhtml_function_coverage=1 00:16:03.417 --rc genhtml_legend=1 00:16:03.417 --rc geninfo_all_blocks=1 00:16:03.417 --rc geninfo_unexecuted_blocks=1 00:16:03.417 00:16:03.417 ' 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
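For reference, the `lt 1.15 2` check traced just above expands to `cmp_versions`, which splits both version strings on '.', '-' and ':' and compares them field by field. A minimal sketch of that logic follows (illustrative only, assuming purely numeric components; the canonical helper in scripts/common.sh also validates each field via its decimal() function):
lt_sketch() { # usage: lt_sketch 1.15 2  -> exit 0 if version $1 is older than $2
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
lt_sketch 1.15 2 && echo "lcov is older than 2.x"
Because the check succeeds here (1 < 2 on the first field), the lcov 1.x-compatible LCOV_OPTS shown above are exported for the rest of the test.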
00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:03.417 17:19:46 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:03.418 17:19:46 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73470 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:03.418 17:19:46 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73470 00:16:03.418 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73470 ']' 00:16:03.418 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.418 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:03.418 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.418 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:03.418 17:19:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:03.680 [2024-10-30 17:19:46.476069] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:16:03.680 [2024-10-30 17:19:46.476968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73470 ] 00:16:03.680 [2024-10-30 17:19:46.641435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:03.941 [2024-10-30 17:19:46.765919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.941 [2024-10-30 17:19:46.766355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.941 [2024-10-30 17:19:46.766467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.513 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:04.513 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:16:04.513 17:19:47 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:04.513 17:19:47 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:04.514 17:19:47 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:04.514 17:19:47 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:04.514 17:19:47 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:04.514 17:19:47 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:05.086 17:19:47 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:05.086 17:19:47 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:05.086 17:19:47 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:05.086 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:05.086 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:05.086 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:05.086 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:05.086 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:05.086 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:05.086 { 00:16:05.086 "name": "nvme0n1", 00:16:05.086 "aliases": [ 
00:16:05.086 "3faa7f7d-810c-4174-8b37-f6cfef1c94bb" 00:16:05.086 ], 00:16:05.086 "product_name": "NVMe disk", 00:16:05.086 "block_size": 4096, 00:16:05.086 "num_blocks": 1310720, 00:16:05.086 "uuid": "3faa7f7d-810c-4174-8b37-f6cfef1c94bb", 00:16:05.086 "numa_id": -1, 00:16:05.086 "assigned_rate_limits": { 00:16:05.086 "rw_ios_per_sec": 0, 00:16:05.086 "rw_mbytes_per_sec": 0, 00:16:05.086 "r_mbytes_per_sec": 0, 00:16:05.086 "w_mbytes_per_sec": 0 00:16:05.086 }, 00:16:05.086 "claimed": true, 00:16:05.086 "claim_type": "read_many_write_one", 00:16:05.086 "zoned": false, 00:16:05.086 "supported_io_types": { 00:16:05.086 "read": true, 00:16:05.086 "write": true, 00:16:05.086 "unmap": true, 00:16:05.086 "flush": true, 00:16:05.086 "reset": true, 00:16:05.087 "nvme_admin": true, 00:16:05.087 "nvme_io": true, 00:16:05.087 "nvme_io_md": false, 00:16:05.087 "write_zeroes": true, 00:16:05.087 "zcopy": false, 00:16:05.087 "get_zone_info": false, 00:16:05.087 "zone_management": false, 00:16:05.087 "zone_append": false, 00:16:05.087 "compare": true, 00:16:05.087 "compare_and_write": false, 00:16:05.087 "abort": true, 00:16:05.087 "seek_hole": false, 00:16:05.087 "seek_data": false, 00:16:05.087 "copy": true, 00:16:05.087 "nvme_iov_md": false 00:16:05.087 }, 00:16:05.087 "driver_specific": { 00:16:05.087 "nvme": [ 00:16:05.087 { 00:16:05.087 "pci_address": "0000:00:11.0", 00:16:05.087 "trid": { 00:16:05.087 "trtype": "PCIe", 00:16:05.087 "traddr": "0000:00:11.0" 00:16:05.087 }, 00:16:05.087 "ctrlr_data": { 00:16:05.087 "cntlid": 0, 00:16:05.087 "vendor_id": "0x1b36", 00:16:05.087 "model_number": "QEMU NVMe Ctrl", 00:16:05.087 "serial_number": "12341", 00:16:05.087 "firmware_revision": "8.0.0", 00:16:05.087 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:05.087 "oacs": { 00:16:05.087 "security": 0, 00:16:05.087 "format": 1, 00:16:05.087 "firmware": 0, 00:16:05.087 "ns_manage": 1 00:16:05.087 }, 00:16:05.087 "multi_ctrlr": false, 00:16:05.087 "ana_reporting": false 00:16:05.087 }, 00:16:05.087 "vs": { 00:16:05.087 "nvme_version": "1.4" 00:16:05.087 }, 00:16:05.087 "ns_data": { 00:16:05.087 "id": 1, 00:16:05.087 "can_share": false 00:16:05.087 } 00:16:05.087 } 00:16:05.087 ], 00:16:05.087 "mp_policy": "active_passive" 00:16:05.087 } 00:16:05.087 } 00:16:05.087 ]' 00:16:05.087 17:19:47 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:05.087 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:05.087 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:05.087 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:05.087 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:05.087 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:16:05.087 17:19:48 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:05.087 17:19:48 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:05.087 17:19:48 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:05.087 17:19:48 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:05.087 17:19:48 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:05.348 17:19:48 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=13980658-f201-4bff-ab6c-6a4c52aebc0c 00:16:05.348 17:19:48 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:05.348 17:19:48 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 13980658-f201-4bff-ab6c-6a4c52aebc0c 00:16:05.609 17:19:48 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:05.870 17:19:48 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=1104e65f-3b09-4d84-93bf-2c8e3d81e413 00:16:05.870 17:19:48 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1104e65f-3b09-4d84-93bf-2c8e3d81e413 00:16:06.131 17:19:48 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.131 17:19:48 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.131 17:19:48 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:06.131 17:19:48 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:06.131 17:19:48 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.131 17:19:48 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:06.131 17:19:48 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.131 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.131 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:06.131 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:06.131 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:06.131 17:19:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.131 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:06.131 { 00:16:06.131 "name": "f0bea3aa-91c0-44b9-add4-d7107cd4ccb4", 00:16:06.131 "aliases": [ 00:16:06.131 "lvs/nvme0n1p0" 00:16:06.131 ], 00:16:06.131 "product_name": "Logical Volume", 00:16:06.131 "block_size": 4096, 00:16:06.131 "num_blocks": 26476544, 00:16:06.131 "uuid": "f0bea3aa-91c0-44b9-add4-d7107cd4ccb4", 00:16:06.131 "assigned_rate_limits": { 00:16:06.131 "rw_ios_per_sec": 0, 00:16:06.131 "rw_mbytes_per_sec": 0, 00:16:06.131 "r_mbytes_per_sec": 0, 00:16:06.131 "w_mbytes_per_sec": 0 00:16:06.131 }, 00:16:06.131 "claimed": false, 00:16:06.131 "zoned": false, 00:16:06.132 "supported_io_types": { 00:16:06.132 "read": true, 00:16:06.132 "write": true, 00:16:06.132 "unmap": true, 00:16:06.132 "flush": false, 00:16:06.132 "reset": true, 00:16:06.132 "nvme_admin": false, 00:16:06.132 "nvme_io": false, 00:16:06.132 "nvme_io_md": false, 00:16:06.132 "write_zeroes": true, 00:16:06.132 "zcopy": false, 00:16:06.132 "get_zone_info": false, 00:16:06.132 "zone_management": false, 00:16:06.132 "zone_append": false, 00:16:06.132 "compare": false, 00:16:06.132 "compare_and_write": false, 00:16:06.132 "abort": false, 00:16:06.132 "seek_hole": true, 00:16:06.132 "seek_data": true, 00:16:06.132 "copy": false, 00:16:06.132 "nvme_iov_md": false 00:16:06.132 }, 00:16:06.132 "driver_specific": { 00:16:06.132 "lvol": { 00:16:06.132 "lvol_store_uuid": "1104e65f-3b09-4d84-93bf-2c8e3d81e413", 00:16:06.132 "base_bdev": "nvme0n1", 00:16:06.132 "thin_provision": true, 00:16:06.132 "num_allocated_clusters": 0, 00:16:06.132 "snapshot": false, 00:16:06.132 "clone": false, 00:16:06.132 "esnap_clone": false 00:16:06.132 } 00:16:06.132 } 00:16:06.132 } 00:16:06.132 ]' 00:16:06.132 17:19:49 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:06.393 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:06.393 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:06.393 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:06.393 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:06.393 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:06.393 17:19:49 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:06.393 17:19:49 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:06.393 17:19:49 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:06.654 17:19:49 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:06.655 17:19:49 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:06.655 17:19:49 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.655 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.655 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:06.655 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:06.655 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:06.655 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.655 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:06.655 { 00:16:06.655 "name": "f0bea3aa-91c0-44b9-add4-d7107cd4ccb4", 00:16:06.655 "aliases": [ 00:16:06.655 "lvs/nvme0n1p0" 00:16:06.655 ], 00:16:06.655 "product_name": "Logical Volume", 00:16:06.655 "block_size": 4096, 00:16:06.655 "num_blocks": 26476544, 00:16:06.655 "uuid": "f0bea3aa-91c0-44b9-add4-d7107cd4ccb4", 00:16:06.655 "assigned_rate_limits": { 00:16:06.655 "rw_ios_per_sec": 0, 00:16:06.655 "rw_mbytes_per_sec": 0, 00:16:06.655 "r_mbytes_per_sec": 0, 00:16:06.655 "w_mbytes_per_sec": 0 00:16:06.655 }, 00:16:06.655 "claimed": false, 00:16:06.655 "zoned": false, 00:16:06.655 "supported_io_types": { 00:16:06.655 "read": true, 00:16:06.655 "write": true, 00:16:06.655 "unmap": true, 00:16:06.655 "flush": false, 00:16:06.655 "reset": true, 00:16:06.655 "nvme_admin": false, 00:16:06.655 "nvme_io": false, 00:16:06.655 "nvme_io_md": false, 00:16:06.655 "write_zeroes": true, 00:16:06.655 "zcopy": false, 00:16:06.655 "get_zone_info": false, 00:16:06.655 "zone_management": false, 00:16:06.655 "zone_append": false, 00:16:06.655 "compare": false, 00:16:06.655 "compare_and_write": false, 00:16:06.655 "abort": false, 00:16:06.655 "seek_hole": true, 00:16:06.655 "seek_data": true, 00:16:06.655 "copy": false, 00:16:06.655 "nvme_iov_md": false 00:16:06.655 }, 00:16:06.655 "driver_specific": { 00:16:06.655 "lvol": { 00:16:06.655 "lvol_store_uuid": "1104e65f-3b09-4d84-93bf-2c8e3d81e413", 00:16:06.655 "base_bdev": "nvme0n1", 00:16:06.655 "thin_provision": true, 00:16:06.655 "num_allocated_clusters": 0, 00:16:06.655 "snapshot": false, 00:16:06.655 "clone": false, 00:16:06.655 "esnap_clone": false 00:16:06.655 } 00:16:06.655 } 00:16:06.655 } 00:16:06.655 ]' 00:16:06.655 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:06.915 17:19:49 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:06.915 17:19:49 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:06.915 17:19:49 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:06.915 17:19:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:06.915 17:19:49 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:06.915 17:19:49 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:06.915 17:19:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 00:16:07.176 17:19:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:07.176 { 00:16:07.176 "name": "f0bea3aa-91c0-44b9-add4-d7107cd4ccb4", 00:16:07.176 "aliases": [ 00:16:07.176 "lvs/nvme0n1p0" 00:16:07.176 ], 00:16:07.176 "product_name": "Logical Volume", 00:16:07.176 "block_size": 4096, 00:16:07.176 "num_blocks": 26476544, 00:16:07.176 "uuid": "f0bea3aa-91c0-44b9-add4-d7107cd4ccb4", 00:16:07.176 "assigned_rate_limits": { 00:16:07.176 "rw_ios_per_sec": 0, 00:16:07.176 "rw_mbytes_per_sec": 0, 00:16:07.176 "r_mbytes_per_sec": 0, 00:16:07.176 "w_mbytes_per_sec": 0 00:16:07.176 }, 00:16:07.176 "claimed": false, 00:16:07.176 "zoned": false, 00:16:07.176 "supported_io_types": { 00:16:07.176 "read": true, 00:16:07.176 "write": true, 00:16:07.176 "unmap": true, 00:16:07.176 "flush": false, 00:16:07.176 "reset": true, 00:16:07.176 "nvme_admin": false, 00:16:07.176 "nvme_io": false, 00:16:07.176 "nvme_io_md": false, 00:16:07.176 "write_zeroes": true, 00:16:07.176 "zcopy": false, 00:16:07.176 "get_zone_info": false, 00:16:07.176 "zone_management": false, 00:16:07.176 "zone_append": false, 00:16:07.176 "compare": false, 00:16:07.176 "compare_and_write": false, 00:16:07.176 "abort": false, 00:16:07.176 "seek_hole": true, 00:16:07.176 "seek_data": true, 00:16:07.176 "copy": false, 00:16:07.176 "nvme_iov_md": false 00:16:07.176 }, 00:16:07.176 "driver_specific": { 00:16:07.176 "lvol": { 00:16:07.176 "lvol_store_uuid": "1104e65f-3b09-4d84-93bf-2c8e3d81e413", 00:16:07.176 "base_bdev": "nvme0n1", 00:16:07.176 "thin_provision": true, 00:16:07.176 "num_allocated_clusters": 0, 00:16:07.176 "snapshot": false, 00:16:07.176 "clone": false, 00:16:07.176 "esnap_clone": false 00:16:07.176 } 00:16:07.176 } 00:16:07.176 } 00:16:07.176 ]' 00:16:07.176 17:19:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:07.176 17:19:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:07.176 17:19:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:07.176 17:19:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:16:07.176 17:19:50 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:07.176 17:19:50 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:07.176 17:19:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:07.176 17:19:50 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:07.438 [2024-10-30 17:19:50.297871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.297911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:07.438 [2024-10-30 17:19:50.297925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:07.438 [2024-10-30 17:19:50.297933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.300894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.301026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:07.438 [2024-10-30 17:19:50.301048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:16:07.438 [2024-10-30 17:19:50.301057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.301218] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:07.438 [2024-10-30 17:19:50.301921] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:07.438 [2024-10-30 17:19:50.301944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.301953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:07.438 [2024-10-30 17:19:50.301963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:16:07.438 [2024-10-30 17:19:50.301971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.302067] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c105cd40-d948-4956-a986-47ea9020475a 00:16:07.438 [2024-10-30 17:19:50.303073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.303105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:07.438 [2024-10-30 17:19:50.303115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:07.438 [2024-10-30 17:19:50.303124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.308004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.308117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:07.438 [2024-10-30 17:19:50.308131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.820 ms 00:16:07.438 [2024-10-30 17:19:50.308140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.308270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.308286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:07.438 [2024-10-30 17:19:50.308295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.080 ms 00:16:07.438 [2024-10-30 17:19:50.308307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.308335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.308345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:07.438 [2024-10-30 17:19:50.308353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:07.438 [2024-10-30 17:19:50.308362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.308386] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:07.438 [2024-10-30 17:19:50.311817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.311840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:07.438 [2024-10-30 17:19:50.311851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.434 ms 00:16:07.438 [2024-10-30 17:19:50.311858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.311901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.311909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:07.438 [2024-10-30 17:19:50.311918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:07.438 [2024-10-30 17:19:50.311935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.311960] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:07.438 [2024-10-30 17:19:50.312094] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:07.438 [2024-10-30 17:19:50.312116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:07.438 [2024-10-30 17:19:50.312127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:07.438 [2024-10-30 17:19:50.312138] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:07.438 [2024-10-30 17:19:50.312147] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:07.438 [2024-10-30 17:19:50.312158] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:07.438 [2024-10-30 17:19:50.312165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:07.438 [2024-10-30 17:19:50.312173] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:07.438 [2024-10-30 17:19:50.312180] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:07.438 [2024-10-30 17:19:50.312190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 [2024-10-30 17:19:50.312207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:07.438 [2024-10-30 17:19:50.312217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:16:07.438 [2024-10-30 17:19:50.312224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.312331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.438 
[2024-10-30 17:19:50.312340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:07.438 [2024-10-30 17:19:50.312350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:07.438 [2024-10-30 17:19:50.312357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.438 [2024-10-30 17:19:50.312465] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:07.438 [2024-10-30 17:19:50.312477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:07.438 [2024-10-30 17:19:50.312489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:07.438 [2024-10-30 17:19:50.312497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.438 [2024-10-30 17:19:50.312506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:07.438 [2024-10-30 17:19:50.312512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:07.438 [2024-10-30 17:19:50.312521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:07.438 [2024-10-30 17:19:50.312528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:07.438 [2024-10-30 17:19:50.312536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:07.439 [2024-10-30 17:19:50.312550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:07.439 [2024-10-30 17:19:50.312557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:07.439 [2024-10-30 17:19:50.312565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:07.439 [2024-10-30 17:19:50.312571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:07.439 [2024-10-30 17:19:50.312579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:07.439 [2024-10-30 17:19:50.312586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:07.439 [2024-10-30 17:19:50.312602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:07.439 [2024-10-30 17:19:50.312610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:07.439 [2024-10-30 17:19:50.312626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:07.439 [2024-10-30 17:19:50.312640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:07.439 [2024-10-30 17:19:50.312646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:07.439 [2024-10-30 17:19:50.312661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:07.439 [2024-10-30 17:19:50.312668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:07.439 [2024-10-30 17:19:50.312682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:07.439 [2024-10-30 17:19:50.312688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:07.439 [2024-10-30 17:19:50.312702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:07.439 [2024-10-30 17:19:50.312712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:07.439 [2024-10-30 17:19:50.312726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:07.439 [2024-10-30 17:19:50.312732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:07.439 [2024-10-30 17:19:50.312740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:07.439 [2024-10-30 17:19:50.312746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:07.439 [2024-10-30 17:19:50.312754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:07.439 [2024-10-30 17:19:50.312760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:07.439 [2024-10-30 17:19:50.312774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:07.439 [2024-10-30 17:19:50.312782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312788] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:07.439 [2024-10-30 17:19:50.312796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:07.439 [2024-10-30 17:19:50.312803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:07.439 [2024-10-30 17:19:50.312812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.439 [2024-10-30 17:19:50.312819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:07.439 [2024-10-30 17:19:50.312829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:07.439 [2024-10-30 17:19:50.312836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:07.439 [2024-10-30 17:19:50.312846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:07.439 [2024-10-30 17:19:50.312852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:07.439 [2024-10-30 17:19:50.312860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:07.439 [2024-10-30 17:19:50.312869] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:07.439 [2024-10-30 17:19:50.312880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:07.439 [2024-10-30 17:19:50.312888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:07.439 [2024-10-30 17:19:50.312897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:07.439 [2024-10-30 17:19:50.312905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:07.439 [2024-10-30 17:19:50.312913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:07.439 [2024-10-30 17:19:50.312920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:07.439 [2024-10-30 17:19:50.312928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:07.439 [2024-10-30 17:19:50.312935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:07.439 [2024-10-30 17:19:50.312944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:07.439 [2024-10-30 17:19:50.312950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:07.439 [2024-10-30 17:19:50.312961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:07.439 [2024-10-30 17:19:50.312968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:07.439 [2024-10-30 17:19:50.312976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:07.439 [2024-10-30 17:19:50.312983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:07.439 [2024-10-30 17:19:50.312991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:07.439 [2024-10-30 17:19:50.312998] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:07.439 [2024-10-30 17:19:50.313008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:07.439 [2024-10-30 17:19:50.313016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:07.439 [2024-10-30 17:19:50.313026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:07.439 [2024-10-30 17:19:50.313033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:07.439 [2024-10-30 17:19:50.313041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:07.439 [2024-10-30 17:19:50.313049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.439 [2024-10-30 17:19:50.313065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:07.439 [2024-10-30 17:19:50.313072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:16:07.439 [2024-10-30 17:19:50.313080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.439 [2024-10-30 17:19:50.313145] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:07.439 [2024-10-30 17:19:50.313158] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:09.986 [2024-10-30 17:19:52.946369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.986 [2024-10-30 17:19:52.946417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:09.986 [2024-10-30 17:19:52.946433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2633.213 ms 00:16:09.986 [2024-10-30 17:19:52.946444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.247 [2024-10-30 17:19:52.971379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.247 [2024-10-30 17:19:52.971418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:10.248 [2024-10-30 17:19:52.971431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.705 ms 00:16:10.248 [2024-10-30 17:19:52.971440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:52.971564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:52.971580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:10.248 [2024-10-30 17:19:52.971589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:10.248 [2024-10-30 17:19:52.971600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.009990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.010028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:10.248 [2024-10-30 17:19:53.010044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.353 ms 00:16:10.248 [2024-10-30 17:19:53.010054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.010137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.010161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:10.248 [2024-10-30 17:19:53.010173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:10.248 [2024-10-30 17:19:53.010185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.010542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.010571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:10.248 [2024-10-30 17:19:53.010584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:16:10.248 [2024-10-30 17:19:53.010597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.010745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.010912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:10.248 [2024-10-30 17:19:53.010936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:10.248 [2024-10-30 17:19:53.010951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.027814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.027842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:10.248 [2024-10-30 17:19:53.027852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.806 ms 00:16:10.248 [2024-10-30 17:19:53.027860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.039122] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:10.248 [2024-10-30 17:19:53.053068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.053097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:10.248 [2024-10-30 17:19:53.053109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.101 ms 00:16:10.248 [2024-10-30 17:19:53.053119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.127352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.127391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:10.248 [2024-10-30 17:19:53.127408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.171 ms 00:16:10.248 [2024-10-30 17:19:53.127423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.127749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.127770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:10.248 [2024-10-30 17:19:53.127783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:16:10.248 [2024-10-30 17:19:53.127791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.151555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.151597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:10.248 [2024-10-30 17:19:53.151613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.734 ms 00:16:10.248 [2024-10-30 17:19:53.151621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.174128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.174157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:10.248 [2024-10-30 17:19:53.174170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.447 ms 00:16:10.248 [2024-10-30 17:19:53.174178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.248 [2024-10-30 17:19:53.174771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.248 [2024-10-30 17:19:53.174791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:10.248 [2024-10-30 17:19:53.174802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:16:10.248 [2024-10-30 17:19:53.174809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.508 [2024-10-30 17:19:53.244953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.508 [2024-10-30 17:19:53.245075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:10.508 [2024-10-30 17:19:53.245141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.114 ms 00:16:10.508 [2024-10-30 17:19:53.245171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
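For orientation while reading this startup trace: the bdev stack that ftl0 is being brought up on was assembled by the RPC calls traced earlier in this test. Condensed into one place (a recap of those calls, using the same addresses, sizes and UUIDs as this particular run, not an additional step), the sequence amounts to:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe device
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # -> 1104e65f-3b09-4d84-93bf-2c8e3d81e413
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u 1104e65f-3b09-4d84-93bf-2c8e3d81e413   # thin base lvol
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe device
$rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache partition nvc0n1p0
$rpc -t 240 bdev_ftl_create -b ftl0 -d f0bea3aa-91c0-44b9-add4-d7107cd4ccb4 -c nvc0n1p0 \
     --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
The final bdev_ftl_create call is what produces the management trace_step notices in this section; it returns the ftl0 name/UUID JSON once the 'FTL startup' management process finishes below.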
00:16:10.508 [2024-10-30 17:19:53.269500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.508 [2024-10-30 17:19:53.269608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:10.508 [2024-10-30 17:19:53.269664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.220 ms 00:16:10.508 [2024-10-30 17:19:53.269692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.508 [2024-10-30 17:19:53.292988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.508 [2024-10-30 17:19:53.293093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:10.508 [2024-10-30 17:19:53.293160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.227 ms 00:16:10.508 [2024-10-30 17:19:53.293187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.508 [2024-10-30 17:19:53.316486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.508 [2024-10-30 17:19:53.316596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:10.508 [2024-10-30 17:19:53.316652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.210 ms 00:16:10.508 [2024-10-30 17:19:53.316688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.508 [2024-10-30 17:19:53.316793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.508 [2024-10-30 17:19:53.316825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:10.508 [2024-10-30 17:19:53.316924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:10.508 [2024-10-30 17:19:53.316948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.508 [2024-10-30 17:19:53.317091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.508 [2024-10-30 17:19:53.317160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:10.508 [2024-10-30 17:19:53.317229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:10.508 [2024-10-30 17:19:53.317255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.508 [2024-10-30 17:19:53.318013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:10.508 [2024-10-30 17:19:53.321057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3019.867 ms, result 0 00:16:10.508 [2024-10-30 17:19:53.321931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:10.508 { 00:16:10.508 "name": "ftl0", 00:16:10.508 "uuid": "c105cd40-d948-4956-a986-47ea9020475a" 00:16:10.508 } 00:16:10.509 17:19:53 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:10.509 17:19:53 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:16:10.509 17:19:53 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:10.509 17:19:53 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:16:10.509 17:19:53 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:10.509 17:19:53 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:10.509 17:19:53 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:10.769 17:19:53 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:10.769 [ 00:16:10.769 { 00:16:10.769 "name": "ftl0", 00:16:10.769 "aliases": [ 00:16:10.769 "c105cd40-d948-4956-a986-47ea9020475a" 00:16:10.769 ], 00:16:10.769 "product_name": "FTL disk", 00:16:10.769 "block_size": 4096, 00:16:10.769 "num_blocks": 23592960, 00:16:10.769 "uuid": "c105cd40-d948-4956-a986-47ea9020475a", 00:16:10.769 "assigned_rate_limits": { 00:16:10.769 "rw_ios_per_sec": 0, 00:16:10.769 "rw_mbytes_per_sec": 0, 00:16:10.769 "r_mbytes_per_sec": 0, 00:16:10.769 "w_mbytes_per_sec": 0 00:16:10.769 }, 00:16:10.769 "claimed": false, 00:16:10.769 "zoned": false, 00:16:10.769 "supported_io_types": { 00:16:10.769 "read": true, 00:16:10.769 "write": true, 00:16:10.769 "unmap": true, 00:16:10.769 "flush": true, 00:16:10.769 "reset": false, 00:16:10.769 "nvme_admin": false, 00:16:10.769 "nvme_io": false, 00:16:10.769 "nvme_io_md": false, 00:16:10.769 "write_zeroes": true, 00:16:10.769 "zcopy": false, 00:16:10.769 "get_zone_info": false, 00:16:10.769 "zone_management": false, 00:16:10.770 "zone_append": false, 00:16:10.770 "compare": false, 00:16:10.770 "compare_and_write": false, 00:16:10.770 "abort": false, 00:16:10.770 "seek_hole": false, 00:16:10.770 "seek_data": false, 00:16:10.770 "copy": false, 00:16:10.770 "nvme_iov_md": false 00:16:10.770 }, 00:16:10.770 "driver_specific": { 00:16:10.770 "ftl": { 00:16:10.770 "base_bdev": "f0bea3aa-91c0-44b9-add4-d7107cd4ccb4", 00:16:10.770 "cache": "nvc0n1p0" 00:16:10.770 } 00:16:10.770 } 00:16:10.770 } 00:16:10.770 ] 00:16:10.770 17:19:53 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:16:10.770 17:19:53 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:10.770 17:19:53 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:11.029 17:19:53 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:11.029 17:19:53 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:11.290 17:19:54 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:11.290 { 00:16:11.290 "name": "ftl0", 00:16:11.290 "aliases": [ 00:16:11.290 "c105cd40-d948-4956-a986-47ea9020475a" 00:16:11.290 ], 00:16:11.290 "product_name": "FTL disk", 00:16:11.290 "block_size": 4096, 00:16:11.290 "num_blocks": 23592960, 00:16:11.290 "uuid": "c105cd40-d948-4956-a986-47ea9020475a", 00:16:11.290 "assigned_rate_limits": { 00:16:11.290 "rw_ios_per_sec": 0, 00:16:11.290 "rw_mbytes_per_sec": 0, 00:16:11.290 "r_mbytes_per_sec": 0, 00:16:11.290 "w_mbytes_per_sec": 0 00:16:11.290 }, 00:16:11.290 "claimed": false, 00:16:11.290 "zoned": false, 00:16:11.290 "supported_io_types": { 00:16:11.290 "read": true, 00:16:11.290 "write": true, 00:16:11.290 "unmap": true, 00:16:11.290 "flush": true, 00:16:11.290 "reset": false, 00:16:11.290 "nvme_admin": false, 00:16:11.290 "nvme_io": false, 00:16:11.290 "nvme_io_md": false, 00:16:11.290 "write_zeroes": true, 00:16:11.290 "zcopy": false, 00:16:11.290 "get_zone_info": false, 00:16:11.290 "zone_management": false, 00:16:11.290 "zone_append": false, 00:16:11.290 "compare": false, 00:16:11.290 "compare_and_write": false, 00:16:11.290 "abort": false, 00:16:11.290 "seek_hole": false, 00:16:11.290 "seek_data": false, 00:16:11.290 "copy": false, 00:16:11.290 "nvme_iov_md": false 00:16:11.290 }, 00:16:11.290 "driver_specific": { 00:16:11.290 "ftl": { 00:16:11.290 "base_bdev": "f0bea3aa-91c0-44b9-add4-d7107cd4ccb4", 
00:16:11.290 "cache": "nvc0n1p0" 00:16:11.290 } 00:16:11.290 } 00:16:11.290 } 00:16:11.290 ]' 00:16:11.290 17:19:54 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:11.290 17:19:54 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:11.290 17:19:54 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:11.553 [2024-10-30 17:19:54.353563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.353601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:11.553 [2024-10-30 17:19:54.353615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:11.553 [2024-10-30 17:19:54.353625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.353658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:11.553 [2024-10-30 17:19:54.356312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.356340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:11.553 [2024-10-30 17:19:54.356360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:16:11.553 [2024-10-30 17:19:54.356368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.356991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.357004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:11.553 [2024-10-30 17:19:54.357014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:16:11.553 [2024-10-30 17:19:54.357021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.360800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.360875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:11.553 [2024-10-30 17:19:54.360929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.745 ms 00:16:11.553 [2024-10-30 17:19:54.360958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.367999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.368102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:11.553 [2024-10-30 17:19:54.368157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.969 ms 00:16:11.553 [2024-10-30 17:19:54.368183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.392300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.392408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:11.553 [2024-10-30 17:19:54.392464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.916 ms 00:16:11.553 [2024-10-30 17:19:54.392490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.407850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.407964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:11.553 [2024-10-30 17:19:54.408023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.285 ms 00:16:11.553 [2024-10-30 17:19:54.408051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.408315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.408355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:11.553 [2024-10-30 17:19:54.408382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:16:11.553 [2024-10-30 17:19:54.408466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.431760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.431867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:11.553 [2024-10-30 17:19:54.431922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.246 ms 00:16:11.553 [2024-10-30 17:19:54.431947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.454532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.454634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:11.553 [2024-10-30 17:19:54.454689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.478 ms 00:16:11.553 [2024-10-30 17:19:54.454714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.477231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.477332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:11.553 [2024-10-30 17:19:54.477384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.442 ms 00:16:11.553 [2024-10-30 17:19:54.477409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.499755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.553 [2024-10-30 17:19:54.499859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:11.553 [2024-10-30 17:19:54.499913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.220 ms 00:16:11.553 [2024-10-30 17:19:54.499939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.553 [2024-10-30 17:19:54.500009] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:11.553 [2024-10-30 17:19:54.500042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500402] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.500990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 
[2024-10-30 17:19:54.501510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.501974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.502008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.502037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.502105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.502134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.502164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.502192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.502293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:11.553 [2024-10-30 17:19:54.502326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:11.554 [2024-10-30 17:19:54.502800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.502973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.503972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:11.554 [2024-10-30 17:19:54.504159] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:11.554 [2024-10-30 17:19:54.504170] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c105cd40-d948-4956-a986-47ea9020475a 00:16:11.554 [2024-10-30 17:19:54.504181] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:11.554 [2024-10-30 17:19:54.504190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:11.554 [2024-10-30 17:19:54.504197] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:11.554 [2024-10-30 17:19:54.504217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:11.554 [2024-10-30 17:19:54.504224] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:11.554 [2024-10-30 17:19:54.504233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:16:11.554 [2024-10-30 17:19:54.504242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:11.554 [2024-10-30 17:19:54.504250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:11.554 [2024-10-30 17:19:54.504256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:11.554 [2024-10-30 17:19:54.504265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.554 [2024-10-30 17:19:54.504273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:11.554 [2024-10-30 17:19:54.504283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:16:11.554 [2024-10-30 17:19:54.504289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.554 [2024-10-30 17:19:54.517005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.554 [2024-10-30 17:19:54.517104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:11.554 [2024-10-30 17:19:54.517163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.655 ms 00:16:11.554 [2024-10-30 17:19:54.517189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.554 [2024-10-30 17:19:54.517659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.554 [2024-10-30 17:19:54.517744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:11.554 [2024-10-30 17:19:54.517815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:16:11.554 [2024-10-30 17:19:54.517843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.562100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.562219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:11.816 [2024-10-30 17:19:54.562277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.562306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.562430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.562528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:11.816 [2024-10-30 17:19:54.562603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.562628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.562713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.562777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:11.816 [2024-10-30 17:19:54.562831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.562853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.562902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.562927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:11.816 [2024-10-30 17:19:54.562952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.563048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.644562] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.644704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:11.816 [2024-10-30 17:19:54.644756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.644778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.709048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.709189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:11.816 [2024-10-30 17:19:54.709274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.709300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.709400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.709430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.816 [2024-10-30 17:19:54.709468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.709534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.709613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.709641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.816 [2024-10-30 17:19:54.709700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.709726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.709885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.709940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.816 [2024-10-30 17:19:54.710024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.710050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.710133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.710223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:11.816 [2024-10-30 17:19:54.710252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.710305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.710390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.710451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.816 [2024-10-30 17:19:54.710484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.710529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.816 [2024-10-30 17:19:54.710607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.816 [2024-10-30 17:19:54.710730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.816 [2024-10-30 17:19:54.710759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.816 [2024-10-30 17:19:54.710782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:11.816 [2024-10-30 17:19:54.711002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 357.425 ms, result 0 00:16:11.816 true 00:16:11.816 17:19:54 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73470 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73470 ']' 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73470 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73470 00:16:11.816 killing process with pid 73470 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73470' 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73470 00:16:11.816 17:19:54 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73470 00:16:18.404 17:20:00 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:18.974 65536+0 records in 00:16:18.974 65536+0 records out 00:16:18.974 268435456 bytes (268 MB, 256 MiB) copied, 1.08607 s, 247 MB/s 00:16:18.974 17:20:01 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:19.234 [2024-10-30 17:20:02.009336] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
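The copy that begins here is driven by two ordinary commands, both visible verbatim in the trace above: dd builds a 256 MiB random pattern (65536 blocks of 4 KiB), and spdk_dd replays it onto the ftl0 bdev using the ftl.json configuration saved earlier in the log with "rpc.py save_subsystem_config -n bdev". A minimal stand-alone sketch of that step follows; the output path of the dd call is an assumption inferred from the --if argument of the spdk_dd invocation (the redirection itself is not shown in the log), and all other paths are taken as-is from the trace.

# 256 MiB of random data: 65536 blocks x 4 KiB, same parameters as the dd call above
# (output file name assumed from the --if argument of the spdk_dd call below)
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

# Replay the pattern onto the FTL bdev; ftl.json holds the bdev subsystem config
# captured earlier in the log via "rpc.py save_subsystem_config -n bdev"
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json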
00:16:19.234 [2024-10-30 17:20:02.009459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73655 ] 00:16:19.234 [2024-10-30 17:20:02.165506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.494 [2024-10-30 17:20:02.260521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.756 [2024-10-30 17:20:02.512997] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:19.756 [2024-10-30 17:20:02.513055] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:19.756 [2024-10-30 17:20:02.672235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.756 [2024-10-30 17:20:02.672292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:19.756 [2024-10-30 17:20:02.672307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:19.757 [2024-10-30 17:20:02.672316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.675257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.675301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:19.757 [2024-10-30 17:20:02.675312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.921 ms 00:16:19.757 [2024-10-30 17:20:02.675319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.675425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:19.757 [2024-10-30 17:20:02.676103] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:19.757 [2024-10-30 17:20:02.676133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.676142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:19.757 [2024-10-30 17:20:02.676152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:16:19.757 [2024-10-30 17:20:02.676160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.677742] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:19.757 [2024-10-30 17:20:02.691478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.691524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:19.757 [2024-10-30 17:20:02.691543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.738 ms 00:16:19.757 [2024-10-30 17:20:02.691551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.691664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.691677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:19.757 [2024-10-30 17:20:02.691686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:19.757 [2024-10-30 17:20:02.691693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.699604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:19.757 [2024-10-30 17:20:02.699635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:19.757 [2024-10-30 17:20:02.699644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.866 ms 00:16:19.757 [2024-10-30 17:20:02.699652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.699737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.699746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:19.757 [2024-10-30 17:20:02.699754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:19.757 [2024-10-30 17:20:02.699762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.699787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.699796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:19.757 [2024-10-30 17:20:02.699806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:19.757 [2024-10-30 17:20:02.699813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.699832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:19.757 [2024-10-30 17:20:02.703241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.703268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:19.757 [2024-10-30 17:20:02.703277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms 00:16:19.757 [2024-10-30 17:20:02.703285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.703331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.703340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:19.757 [2024-10-30 17:20:02.703348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:19.757 [2024-10-30 17:20:02.703355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.703372] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:19.757 [2024-10-30 17:20:02.703391] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:19.757 [2024-10-30 17:20:02.703426] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:19.757 [2024-10-30 17:20:02.703441] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:19.757 [2024-10-30 17:20:02.703542] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:19.757 [2024-10-30 17:20:02.703553] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:19.757 [2024-10-30 17:20:02.703563] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:19.757 [2024-10-30 17:20:02.703573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:19.757 [2024-10-30 17:20:02.703581] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:19.757 [2024-10-30 17:20:02.703592] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:19.757 [2024-10-30 17:20:02.703599] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:19.757 [2024-10-30 17:20:02.703606] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:19.757 [2024-10-30 17:20:02.703612] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:19.757 [2024-10-30 17:20:02.703620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.703627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:19.757 [2024-10-30 17:20:02.703634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:16:19.757 [2024-10-30 17:20:02.703641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.703729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.757 [2024-10-30 17:20:02.703736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:19.757 [2024-10-30 17:20:02.703743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:19.757 [2024-10-30 17:20:02.703753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.757 [2024-10-30 17:20:02.703860] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:19.757 [2024-10-30 17:20:02.703870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:19.757 [2024-10-30 17:20:02.703878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:19.757 [2024-10-30 17:20:02.703886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.757 [2024-10-30 17:20:02.703893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:19.757 [2024-10-30 17:20:02.703900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:19.757 [2024-10-30 17:20:02.703907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:19.757 [2024-10-30 17:20:02.703914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:19.757 [2024-10-30 17:20:02.703921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:19.757 [2024-10-30 17:20:02.703928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:19.757 [2024-10-30 17:20:02.703934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:19.757 [2024-10-30 17:20:02.703941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:19.757 [2024-10-30 17:20:02.703947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:19.757 [2024-10-30 17:20:02.703959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:19.757 [2024-10-30 17:20:02.703966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:19.757 [2024-10-30 17:20:02.703977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.757 [2024-10-30 17:20:02.703984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:19.757 [2024-10-30 17:20:02.703990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:19.757 [2024-10-30 17:20:02.703997] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:19.757 [2024-10-30 17:20:02.704010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:19.757 [2024-10-30 17:20:02.704023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:19.757 [2024-10-30 17:20:02.704029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:19.757 [2024-10-30 17:20:02.704042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:19.757 [2024-10-30 17:20:02.704049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:19.757 [2024-10-30 17:20:02.704062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:19.757 [2024-10-30 17:20:02.704068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:19.757 [2024-10-30 17:20:02.704081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:19.757 [2024-10-30 17:20:02.704087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:19.757 [2024-10-30 17:20:02.704100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:19.757 [2024-10-30 17:20:02.704106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:19.757 [2024-10-30 17:20:02.704112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:19.757 [2024-10-30 17:20:02.704119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:19.757 [2024-10-30 17:20:02.704125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:19.757 [2024-10-30 17:20:02.704131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:19.757 [2024-10-30 17:20:02.704145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:19.757 [2024-10-30 17:20:02.704152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704158] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:19.757 [2024-10-30 17:20:02.704166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:19.757 [2024-10-30 17:20:02.704172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:19.757 [2024-10-30 17:20:02.704180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.757 [2024-10-30 17:20:02.704192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:19.758 [2024-10-30 17:20:02.704211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:19.758 [2024-10-30 17:20:02.704218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:19.758 
[2024-10-30 17:20:02.704225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:19.758 [2024-10-30 17:20:02.704232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:19.758 [2024-10-30 17:20:02.704238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:19.758 [2024-10-30 17:20:02.704246] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:19.758 [2024-10-30 17:20:02.704255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:19.758 [2024-10-30 17:20:02.704263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:19.758 [2024-10-30 17:20:02.704271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:19.758 [2024-10-30 17:20:02.704278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:19.758 [2024-10-30 17:20:02.704285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:19.758 [2024-10-30 17:20:02.704292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:19.758 [2024-10-30 17:20:02.704299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:19.758 [2024-10-30 17:20:02.704306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:19.758 [2024-10-30 17:20:02.704313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:19.758 [2024-10-30 17:20:02.704320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:19.758 [2024-10-30 17:20:02.704327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:19.758 [2024-10-30 17:20:02.704334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:19.758 [2024-10-30 17:20:02.704341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:19.758 [2024-10-30 17:20:02.704348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:19.758 [2024-10-30 17:20:02.704355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:19.758 [2024-10-30 17:20:02.704362] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:19.758 [2024-10-30 17:20:02.704370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:19.758 [2024-10-30 17:20:02.704378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:19.758 [2024-10-30 17:20:02.704386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:19.758 [2024-10-30 17:20:02.704393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:19.758 [2024-10-30 17:20:02.704400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:19.758 [2024-10-30 17:20:02.704407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.758 [2024-10-30 17:20:02.704414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:19.758 [2024-10-30 17:20:02.704421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:16:19.758 [2024-10-30 17:20:02.704431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.758 [2024-10-30 17:20:02.730639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.758 [2024-10-30 17:20:02.730768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:19.758 [2024-10-30 17:20:02.730826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.155 ms 00:16:19.758 [2024-10-30 17:20:02.730851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.758 [2024-10-30 17:20:02.730992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.758 [2024-10-30 17:20:02.731020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:19.758 [2024-10-30 17:20:02.731100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:19.758 [2024-10-30 17:20:02.731123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.769347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.769481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:20.021 [2024-10-30 17:20:02.769542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.187 ms 00:16:20.021 [2024-10-30 17:20:02.769565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.769671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.769700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:20.021 [2024-10-30 17:20:02.769720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:20.021 [2024-10-30 17:20:02.769808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.770154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.770217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:20.021 [2024-10-30 17:20:02.770241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:16:20.021 [2024-10-30 17:20:02.770261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.770402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.770426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:20.021 [2024-10-30 17:20:02.770446] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:16:20.021 [2024-10-30 17:20:02.770464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.783730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.783835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:20.021 [2024-10-30 17:20:02.783883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.236 ms 00:16:20.021 [2024-10-30 17:20:02.783906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.796687] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:20.021 [2024-10-30 17:20:02.796805] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:20.021 [2024-10-30 17:20:02.796863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.796883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:20.021 [2024-10-30 17:20:02.796902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.848 ms 00:16:20.021 [2024-10-30 17:20:02.796920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.821520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.821647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:20.021 [2024-10-30 17:20:02.821708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.526 ms 00:16:20.021 [2024-10-30 17:20:02.821729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.834000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.834105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:20.021 [2024-10-30 17:20:02.834152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.140 ms 00:16:20.021 [2024-10-30 17:20:02.834173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.846354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.846478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:20.021 [2024-10-30 17:20:02.846533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.690 ms 00:16:20.021 [2024-10-30 17:20:02.846555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.847603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.847729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:20.021 [2024-10-30 17:20:02.847784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:16:20.021 [2024-10-30 17:20:02.847807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.905147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.905309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:20.021 [2024-10-30 17:20:02.905364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.302 ms 00:16:20.021 [2024-10-30 17:20:02.905387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.916020] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:20.021 [2024-10-30 17:20:02.930398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.930536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:20.021 [2024-10-30 17:20:02.930590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.678 ms 00:16:20.021 [2024-10-30 17:20:02.930614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.930707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.930733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:20.021 [2024-10-30 17:20:02.930757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:20.021 [2024-10-30 17:20:02.930775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.930836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.930931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:20.021 [2024-10-30 17:20:02.930951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:20.021 [2024-10-30 17:20:02.930970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.931004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.931029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:20.021 [2024-10-30 17:20:02.931094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:20.021 [2024-10-30 17:20:02.931121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.931168] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:20.021 [2024-10-30 17:20:02.931191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.931295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:20.021 [2024-10-30 17:20:02.931316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:20.021 [2024-10-30 17:20:02.931334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.955438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.955558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:20.021 [2024-10-30 17:20:02.955618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.033 ms 00:16:20.021 [2024-10-30 17:20:02.955640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.021 [2024-10-30 17:20:02.956125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.021 [2024-10-30 17:20:02.956259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:20.021 [2024-10-30 17:20:02.956346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:20.021 [2024-10-30 17:20:02.956370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:20.021 [2024-10-30 17:20:02.957269] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:20.021 [2024-10-30 17:20:02.960380] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.750 ms, result 0 00:16:20.021 [2024-10-30 17:20:02.961595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:20.021 [2024-10-30 17:20:02.974544] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.409  [2024-10-30T17:20:05.335Z] Copying: 13/256 [MB] (13 MBps) [2024-10-30T17:20:06.279Z] Copying: 34/256 [MB] (21 MBps) [2024-10-30T17:20:07.223Z] Copying: 70/256 [MB] (36 MBps) [2024-10-30T17:20:08.167Z] Copying: 106/256 [MB] (36 MBps) [2024-10-30T17:20:09.111Z] Copying: 140/256 [MB] (33 MBps) [2024-10-30T17:20:10.056Z] Copying: 157/256 [MB] (16 MBps) [2024-10-30T17:20:11.001Z] Copying: 173/256 [MB] (16 MBps) [2024-10-30T17:20:12.389Z] Copying: 203/256 [MB] (29 MBps) [2024-10-30T17:20:13.333Z] Copying: 222/256 [MB] (18 MBps) [2024-10-30T17:20:13.907Z] Copying: 242/256 [MB] (19 MBps) [2024-10-30T17:20:13.907Z] Copying: 256/256 [MB] (average 23 MBps)[2024-10-30 17:20:13.813286] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:30.926 [2024-10-30 17:20:13.823423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.926 [2024-10-30 17:20:13.823473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:30.926 [2024-10-30 17:20:13.823490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:30.926 [2024-10-30 17:20:13.823499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.926 [2024-10-30 17:20:13.823523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:30.926 [2024-10-30 17:20:13.826529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.926 [2024-10-30 17:20:13.826568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:30.926 [2024-10-30 17:20:13.826587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.990 ms 00:16:30.926 [2024-10-30 17:20:13.826596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.926 [2024-10-30 17:20:13.829872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.926 [2024-10-30 17:20:13.829921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:30.926 [2024-10-30 17:20:13.829933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:16:30.926 [2024-10-30 17:20:13.829942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.926 [2024-10-30 17:20:13.838928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.926 [2024-10-30 17:20:13.838973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:30.926 [2024-10-30 17:20:13.838986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.967 ms 00:16:30.926 [2024-10-30 17:20:13.839001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.926 [2024-10-30 17:20:13.846015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.926 [2024-10-30 17:20:13.846067] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:30.926 [2024-10-30 17:20:13.846078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.966 ms 00:16:30.926 [2024-10-30 17:20:13.846085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.926 [2024-10-30 17:20:13.871851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.926 [2024-10-30 17:20:13.871895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:30.926 [2024-10-30 17:20:13.871907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.697 ms 00:16:30.926 [2024-10-30 17:20:13.871915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.926 [2024-10-30 17:20:13.888897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.926 [2024-10-30 17:20:13.888943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:30.926 [2024-10-30 17:20:13.888955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.932 ms 00:16:30.926 [2024-10-30 17:20:13.888970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.926 [2024-10-30 17:20:13.889124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.926 [2024-10-30 17:20:13.889136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:30.926 [2024-10-30 17:20:13.889145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:30.927 [2024-10-30 17:20:13.889153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.188 [2024-10-30 17:20:13.914965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.188 [2024-10-30 17:20:13.915009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:31.188 [2024-10-30 17:20:13.915020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.794 ms 00:16:31.188 [2024-10-30 17:20:13.915027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.188 [2024-10-30 17:20:13.940982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.188 [2024-10-30 17:20:13.941026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:31.188 [2024-10-30 17:20:13.941039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:16:31.188 [2024-10-30 17:20:13.941045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.188 [2024-10-30 17:20:13.965907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.188 [2024-10-30 17:20:13.965951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:31.188 [2024-10-30 17:20:13.965963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.800 ms 00:16:31.188 [2024-10-30 17:20:13.965971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.188 [2024-10-30 17:20:13.990901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.188 [2024-10-30 17:20:13.990944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:31.188 [2024-10-30 17:20:13.990955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.851 ms 00:16:31.188 [2024-10-30 17:20:13.990962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.188 [2024-10-30 17:20:13.991012] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:16:31.188 [2024-10-30 17:20:13.991028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:31.188 [2024-10-30 17:20:13.991441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991627] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991820] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:31.189 [2024-10-30 17:20:13.991836] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:31.189 [2024-10-30 17:20:13.991847] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c105cd40-d948-4956-a986-47ea9020475a 00:16:31.189 [2024-10-30 17:20:13.991855] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:31.189 [2024-10-30 17:20:13.991862] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:31.189 [2024-10-30 17:20:13.991870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:31.189 [2024-10-30 17:20:13.991879] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:31.189 [2024-10-30 17:20:13.991887] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:31.189 [2024-10-30 17:20:13.991895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:31.189 [2024-10-30 17:20:13.991902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:31.189 [2024-10-30 17:20:13.991909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:31.189 [2024-10-30 17:20:13.991916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:31.189 [2024-10-30 17:20:13.991923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.189 [2024-10-30 17:20:13.991932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:31.189 [2024-10-30 17:20:13.991940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:16:31.189 [2024-10-30 17:20:13.991948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.189 [2024-10-30 17:20:14.005618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.189 [2024-10-30 17:20:14.005826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:31.189 [2024-10-30 17:20:14.005846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.630 ms 00:16:31.189 [2024-10-30 17:20:14.005854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.189 [2024-10-30 17:20:14.006273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.189 [2024-10-30 17:20:14.006292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:31.189 [2024-10-30 17:20:14.006309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:16:31.189 [2024-10-30 17:20:14.006317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.189 [2024-10-30 17:20:14.045466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.189 [2024-10-30 17:20:14.045512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.189 [2024-10-30 17:20:14.045523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.189 [2024-10-30 17:20:14.045530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.189 [2024-10-30 17:20:14.045636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.189 [2024-10-30 17:20:14.045647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.189 [2024-10-30 17:20:14.045659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.189 [2024-10-30 17:20:14.045666] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.189 [2024-10-30 17:20:14.045722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.189 [2024-10-30 17:20:14.045732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.189 [2024-10-30 17:20:14.045740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.189 [2024-10-30 17:20:14.045748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.189 [2024-10-30 17:20:14.045765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.189 [2024-10-30 17:20:14.045773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.189 [2024-10-30 17:20:14.045781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.189 [2024-10-30 17:20:14.045791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.189 [2024-10-30 17:20:14.132131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.189 [2024-10-30 17:20:14.132184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.189 [2024-10-30 17:20:14.132228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.189 [2024-10-30 17:20:14.132239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.449 [2024-10-30 17:20:14.202394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.449 [2024-10-30 17:20:14.202448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.449 [2024-10-30 17:20:14.202461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.449 [2024-10-30 17:20:14.202476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.449 [2024-10-30 17:20:14.202551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.449 [2024-10-30 17:20:14.202561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:31.449 [2024-10-30 17:20:14.202570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.449 [2024-10-30 17:20:14.202578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.449 [2024-10-30 17:20:14.202612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.449 [2024-10-30 17:20:14.202621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:31.449 [2024-10-30 17:20:14.202629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.449 [2024-10-30 17:20:14.202638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.449 [2024-10-30 17:20:14.202738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.449 [2024-10-30 17:20:14.202748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:31.449 [2024-10-30 17:20:14.202758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.449 [2024-10-30 17:20:14.202766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.449 [2024-10-30 17:20:14.202800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.449 [2024-10-30 17:20:14.202810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:31.449 [2024-10-30 17:20:14.202819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:16:31.449 [2024-10-30 17:20:14.202828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.449 [2024-10-30 17:20:14.202876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.449 [2024-10-30 17:20:14.202886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:31.449 [2024-10-30 17:20:14.202895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.449 [2024-10-30 17:20:14.202902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.449 [2024-10-30 17:20:14.202953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.449 [2024-10-30 17:20:14.202964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:31.449 [2024-10-30 17:20:14.202973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.449 [2024-10-30 17:20:14.202981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.449 [2024-10-30 17:20:14.203139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.700 ms, result 0 00:16:32.392 00:16:32.392 00:16:32.392 17:20:15 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73798 00:16:32.392 17:20:15 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73798 00:16:32.392 17:20:15 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:32.392 17:20:15 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73798 ']' 00:16:32.392 17:20:15 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.392 17:20:15 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:32.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.392 17:20:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.392 17:20:15 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:32.392 17:20:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:32.392 [2024-10-30 17:20:15.241049] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:16:32.392 [2024-10-30 17:20:15.241368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73798 ] 00:16:32.653 [2024-10-30 17:20:15.396246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.653 [2024-10-30 17:20:15.475658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.226 17:20:16 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:33.227 17:20:16 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:16:33.227 17:20:16 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:33.488 [2024-10-30 17:20:16.268977] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.488 [2024-10-30 17:20:16.269025] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.488 [2024-10-30 17:20:16.433604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.433645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:33.488 [2024-10-30 17:20:16.433659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:33.488 [2024-10-30 17:20:16.433668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.436372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.436405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.488 [2024-10-30 17:20:16.436416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:16:33.488 [2024-10-30 17:20:16.436424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.436500] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:33.488 [2024-10-30 17:20:16.437256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:33.488 [2024-10-30 17:20:16.437283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.437291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.488 [2024-10-30 17:20:16.437301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:16:33.488 [2024-10-30 17:20:16.437308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.438427] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:33.488 [2024-10-30 17:20:16.451135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.451174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:33.488 [2024-10-30 17:20:16.451185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.713 ms 00:16:33.488 [2024-10-30 17:20:16.451194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.451290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.451302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:33.488 [2024-10-30 17:20:16.451311] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:33.488 [2024-10-30 17:20:16.451320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.456238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.456382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.488 [2024-10-30 17:20:16.456397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.873 ms 00:16:33.488 [2024-10-30 17:20:16.456407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.456502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.456514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.488 [2024-10-30 17:20:16.456522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:33.488 [2024-10-30 17:20:16.456531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.456554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.456567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:33.488 [2024-10-30 17:20:16.456574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:33.488 [2024-10-30 17:20:16.456583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.456604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:33.488 [2024-10-30 17:20:16.459958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.460071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.488 [2024-10-30 17:20:16.460089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.356 ms 00:16:33.488 [2024-10-30 17:20:16.460096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.460135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.460142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:33.488 [2024-10-30 17:20:16.460152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:33.488 [2024-10-30 17:20:16.460159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.460179] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:33.488 [2024-10-30 17:20:16.460222] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:33.488 [2024-10-30 17:20:16.460263] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:33.488 [2024-10-30 17:20:16.460277] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:33.488 [2024-10-30 17:20:16.460384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:33.488 [2024-10-30 17:20:16.460394] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:33.488 [2024-10-30 17:20:16.460406] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:33.488 [2024-10-30 17:20:16.460415] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:33.488 [2024-10-30 17:20:16.460427] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:33.488 [2024-10-30 17:20:16.460435] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:33.488 [2024-10-30 17:20:16.460444] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:33.488 [2024-10-30 17:20:16.460450] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:33.488 [2024-10-30 17:20:16.460461] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:33.488 [2024-10-30 17:20:16.460469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.488 [2024-10-30 17:20:16.460477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:33.488 [2024-10-30 17:20:16.460485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:16:33.488 [2024-10-30 17:20:16.460493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.488 [2024-10-30 17:20:16.460591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.489 [2024-10-30 17:20:16.460602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:33.489 [2024-10-30 17:20:16.460610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:33.489 [2024-10-30 17:20:16.460618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.489 [2024-10-30 17:20:16.460717] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:33.489 [2024-10-30 17:20:16.460728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:33.489 [2024-10-30 17:20:16.460736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.489 [2024-10-30 17:20:16.460745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:33.489 [2024-10-30 17:20:16.460760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:33.489 [2024-10-30 17:20:16.460779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:33.489 [2024-10-30 17:20:16.460786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.489 [2024-10-30 17:20:16.460801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:33.489 [2024-10-30 17:20:16.460809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:33.489 [2024-10-30 17:20:16.460816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.489 [2024-10-30 17:20:16.460824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:33.489 [2024-10-30 17:20:16.460830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:33.489 [2024-10-30 17:20:16.460838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.489 
[2024-10-30 17:20:16.460845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:33.489 [2024-10-30 17:20:16.460854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:33.489 [2024-10-30 17:20:16.460860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:33.489 [2024-10-30 17:20:16.460881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:33.489 [2024-10-30 17:20:16.460896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:33.489 [2024-10-30 17:20:16.460905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:33.489 [2024-10-30 17:20:16.460919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:33.489 [2024-10-30 17:20:16.460926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:33.489 [2024-10-30 17:20:16.460940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:33.489 [2024-10-30 17:20:16.460948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:33.489 [2024-10-30 17:20:16.460963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:33.489 [2024-10-30 17:20:16.460970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:33.489 [2024-10-30 17:20:16.460977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.489 [2024-10-30 17:20:16.460984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:33.489 [2024-10-30 17:20:16.460992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:33.489 [2024-10-30 17:20:16.460998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.489 [2024-10-30 17:20:16.461006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:33.489 [2024-10-30 17:20:16.461013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:33.489 [2024-10-30 17:20:16.461022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.489 [2024-10-30 17:20:16.461028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:33.489 [2024-10-30 17:20:16.461036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:33.489 [2024-10-30 17:20:16.461042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.489 [2024-10-30 17:20:16.461051] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:33.489 [2024-10-30 17:20:16.461058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:33.489 [2024-10-30 17:20:16.461066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.489 [2024-10-30 17:20:16.461075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.489 [2024-10-30 17:20:16.461084] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:33.489 [2024-10-30 17:20:16.461091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:33.489 [2024-10-30 17:20:16.461099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:33.489 [2024-10-30 17:20:16.461105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:33.489 [2024-10-30 17:20:16.461115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:33.489 [2024-10-30 17:20:16.461122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:33.489 [2024-10-30 17:20:16.461132] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:33.489 [2024-10-30 17:20:16.461141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.489 [2024-10-30 17:20:16.461153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:33.489 [2024-10-30 17:20:16.461160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:33.489 [2024-10-30 17:20:16.461169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:33.489 [2024-10-30 17:20:16.461176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:33.489 [2024-10-30 17:20:16.461185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:33.489 [2024-10-30 17:20:16.461191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:33.489 [2024-10-30 17:20:16.461211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:33.489 [2024-10-30 17:20:16.461219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:33.489 [2024-10-30 17:20:16.461228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:33.489 [2024-10-30 17:20:16.461235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:33.489 [2024-10-30 17:20:16.461244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:33.489 [2024-10-30 17:20:16.461251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:33.489 [2024-10-30 17:20:16.461260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:33.489 [2024-10-30 17:20:16.461267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:33.489 [2024-10-30 17:20:16.461276] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:33.489 [2024-10-30 
17:20:16.461283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.489 [2024-10-30 17:20:16.461295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:33.489 [2024-10-30 17:20:16.461302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:33.489 [2024-10-30 17:20:16.461311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:33.489 [2024-10-30 17:20:16.461319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:33.489 [2024-10-30 17:20:16.461327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.489 [2024-10-30 17:20:16.461334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:33.489 [2024-10-30 17:20:16.461343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:16:33.489 [2024-10-30 17:20:16.461350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.487681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.487803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.751 [2024-10-30 17:20:16.487859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.260 ms 00:16:33.751 [2024-10-30 17:20:16.487882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.488011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.488088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:33.751 [2024-10-30 17:20:16.488115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:33.751 [2024-10-30 17:20:16.488133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.518647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.518759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.751 [2024-10-30 17:20:16.518813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.450 ms 00:16:33.751 [2024-10-30 17:20:16.518837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.518907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.518931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.751 [2024-10-30 17:20:16.518951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:33.751 [2024-10-30 17:20:16.518969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.519336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.519380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.751 [2024-10-30 17:20:16.519403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:16:33.751 [2024-10-30 17:20:16.519421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.519560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.519594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.751 [2024-10-30 17:20:16.519615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:16:33.751 [2024-10-30 17:20:16.519633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.534320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.534423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.751 [2024-10-30 17:20:16.534473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.611 ms 00:16:33.751 [2024-10-30 17:20:16.534495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.547225] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:33.751 [2024-10-30 17:20:16.547354] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:33.751 [2024-10-30 17:20:16.547417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.547439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:33.751 [2024-10-30 17:20:16.547460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.800 ms 00:16:33.751 [2024-10-30 17:20:16.547478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.572497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.572621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:33.751 [2024-10-30 17:20:16.572678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.895 ms 00:16:33.751 [2024-10-30 17:20:16.572700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.585454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.585593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:33.751 [2024-10-30 17:20:16.585657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.204 ms 00:16:33.751 [2024-10-30 17:20:16.585680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.597627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.597764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:33.751 [2024-10-30 17:20:16.597834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.811 ms 00:16:33.751 [2024-10-30 17:20:16.597877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.598589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.598686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:33.751 [2024-10-30 17:20:16.598743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:16:33.751 [2024-10-30 17:20:16.598791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 
17:20:16.670181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.670357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:33.751 [2024-10-30 17:20:16.670425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.346 ms 00:16:33.751 [2024-10-30 17:20:16.670450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.681139] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:33.751 [2024-10-30 17:20:16.697292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.697445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:33.751 [2024-10-30 17:20:16.697499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.741 ms 00:16:33.751 [2024-10-30 17:20:16.697525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.697622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.697652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:33.751 [2024-10-30 17:20:16.697673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:33.751 [2024-10-30 17:20:16.697694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.751 [2024-10-30 17:20:16.697756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.751 [2024-10-30 17:20:16.698073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:33.751 [2024-10-30 17:20:16.698126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:33.752 [2024-10-30 17:20:16.698151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.752 [2024-10-30 17:20:16.698235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.752 [2024-10-30 17:20:16.698265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:33.752 [2024-10-30 17:20:16.698286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:33.752 [2024-10-30 17:20:16.698372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.752 [2024-10-30 17:20:16.698425] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:33.752 [2024-10-30 17:20:16.698454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.752 [2024-10-30 17:20:16.698473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:33.752 [2024-10-30 17:20:16.698495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:33.752 [2024-10-30 17:20:16.698516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.752 [2024-10-30 17:20:16.723349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.752 [2024-10-30 17:20:16.723508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:33.752 [2024-10-30 17:20:16.723574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.796 ms 00:16:33.752 [2024-10-30 17:20:16.723598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.752 [2024-10-30 17:20:16.723787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.752 [2024-10-30 17:20:16.723909] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:33.752 [2024-10-30 17:20:16.724250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:33.752 [2024-10-30 17:20:16.724278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.752 [2024-10-30 17:20:16.725382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:33.752 [2024-10-30 17:20:16.728898] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.421 ms, result 0 00:16:33.752 [2024-10-30 17:20:16.731164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:34.013 Some configs were skipped because the RPC state that can call them passed over. 00:16:34.013 17:20:16 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:34.013 [2024-10-30 17:20:16.931435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.013 [2024-10-30 17:20:16.931563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:34.013 [2024-10-30 17:20:16.931619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:16:34.013 [2024-10-30 17:20:16.931644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.013 [2024-10-30 17:20:16.931693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.947 ms, result 0 00:16:34.013 true 00:16:34.013 17:20:16 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:34.273 [2024-10-30 17:20:17.139464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.273 [2024-10-30 17:20:17.139583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:34.273 [2024-10-30 17:20:17.139636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.473 ms 00:16:34.273 [2024-10-30 17:20:17.139658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.273 [2024-10-30 17:20:17.139709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.717 ms, result 0 00:16:34.273 true 00:16:34.273 17:20:17 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73798 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73798 ']' 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73798 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73798 00:16:34.273 killing process with pid 73798 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73798' 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73798 00:16:34.273 17:20:17 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73798 00:16:35.301 [2024-10-30 17:20:17.926611] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.926680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:35.301 [2024-10-30 17:20:17.926697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:35.301 [2024-10-30 17:20:17.926709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.926735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:35.301 [2024-10-30 17:20:17.929774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.929830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:35.301 [2024-10-30 17:20:17.929850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:16:35.301 [2024-10-30 17:20:17.929858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.930174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.930193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:35.301 [2024-10-30 17:20:17.930220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:16:35.301 [2024-10-30 17:20:17.930230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.934741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.934784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:35.301 [2024-10-30 17:20:17.934798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.486 ms 00:16:35.301 [2024-10-30 17:20:17.934809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.941982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.942028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:35.301 [2024-10-30 17:20:17.942045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.124 ms 00:16:35.301 [2024-10-30 17:20:17.942053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.959415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.959477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:35.301 [2024-10-30 17:20:17.959499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.273 ms 00:16:35.301 [2024-10-30 17:20:17.959516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.969558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.969604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:35.301 [2024-10-30 17:20:17.969619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.959 ms 00:16:35.301 [2024-10-30 17:20:17.969632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.969923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.969951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:35.301 [2024-10-30 17:20:17.969965] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:35.301 [2024-10-30 17:20:17.969973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.981872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.981921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:35.301 [2024-10-30 17:20:17.981935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.872 ms 00:16:35.301 [2024-10-30 17:20:17.981944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:17.992988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:17.993212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:35.301 [2024-10-30 17:20:17.993245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.987 ms 00:16:35.301 [2024-10-30 17:20:17.993253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:18.003714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:18.003763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:35.301 [2024-10-30 17:20:18.003777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.402 ms 00:16:35.301 [2024-10-30 17:20:18.003785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:18.014176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.301 [2024-10-30 17:20:18.014364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:35.301 [2024-10-30 17:20:18.014463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.304 ms 00:16:35.301 [2024-10-30 17:20:18.014489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.301 [2024-10-30 17:20:18.014543] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:35.301 [2024-10-30 17:20:18.014573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:35.301 [2024-10-30 17:20:18.014652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:35.301 [2024-10-30 17:20:18.014683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:35.301 [2024-10-30 17:20:18.014713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:35.301 [2024-10-30 17:20:18.015601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:35.301 [2024-10-30 17:20:18.015635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:35.301 [2024-10-30 17:20:18.015644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:35.301 [2024-10-30 17:20:18.015654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 
17:20:18.015680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:16:35.302 [2024-10-30 17:20:18.015896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.015996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:35.302 [2024-10-30 17:20:18.016494] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:35.302 [2024-10-30 17:20:18.016508] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c105cd40-d948-4956-a986-47ea9020475a 00:16:35.302 [2024-10-30 17:20:18.016524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:35.303 [2024-10-30 17:20:18.016537] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:35.303 [2024-10-30 17:20:18.016547] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:35.303 [2024-10-30 17:20:18.016557] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:35.303 [2024-10-30 17:20:18.016564] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:35.303 [2024-10-30 17:20:18.016574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:35.303 [2024-10-30 17:20:18.016582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:35.303 [2024-10-30 17:20:18.016590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:35.303 [2024-10-30 17:20:18.016597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:35.303 [2024-10-30 17:20:18.016610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:35.303 [2024-10-30 17:20:18.016618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:35.303 [2024-10-30 17:20:18.016630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:16:35.303 [2024-10-30 17:20:18.016638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.030388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.303 [2024-10-30 17:20:18.030434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:35.303 [2024-10-30 17:20:18.030451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.683 ms 00:16:35.303 [2024-10-30 17:20:18.030459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.030904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.303 [2024-10-30 17:20:18.030925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:35.303 [2024-10-30 17:20:18.030937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:16:35.303 [2024-10-30 17:20:18.030945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.080467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.080515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.303 [2024-10-30 17:20:18.080530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.080540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.080650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.080660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.303 [2024-10-30 17:20:18.080671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.080680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.080735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.080745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.303 [2024-10-30 17:20:18.080758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.080765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.080785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.080794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.303 [2024-10-30 17:20:18.080805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.080813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.167354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.167413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.303 [2024-10-30 17:20:18.167430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.167439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 
17:20:18.238183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.238251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.303 [2024-10-30 17:20:18.238267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.238275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.238353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.238366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.303 [2024-10-30 17:20:18.238381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.238390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.238426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.238435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.303 [2024-10-30 17:20:18.238446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.238453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.238552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.238563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.303 [2024-10-30 17:20:18.238576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.238583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.238620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.238629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:35.303 [2024-10-30 17:20:18.238640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.238647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.238694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.238703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.303 [2024-10-30 17:20:18.238717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.238725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.238780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.303 [2024-10-30 17:20:18.238791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.303 [2024-10-30 17:20:18.238801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.303 [2024-10-30 17:20:18.238809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.303 [2024-10-30 17:20:18.238972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 312.327 ms, result 0 00:16:36.261 17:20:18 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:36.261 17:20:18 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:36.261 [2024-10-30 17:20:19.043305] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:16:36.261 [2024-10-30 17:20:19.043465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73851 ] 00:16:36.261 [2024-10-30 17:20:19.212046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.522 [2024-10-30 17:20:19.340626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.791 [2024-10-30 17:20:19.634032] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:36.791 [2024-10-30 17:20:19.634117] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:37.054 [2024-10-30 17:20:19.797255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.797324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:37.054 [2024-10-30 17:20:19.797340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:37.054 [2024-10-30 17:20:19.797350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.800323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.800378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.054 [2024-10-30 17:20:19.800390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:16:37.054 [2024-10-30 17:20:19.800398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.800526] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:37.054 [2024-10-30 17:20:19.801706] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:37.054 [2024-10-30 17:20:19.801772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.801783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.054 [2024-10-30 17:20:19.801793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:16:37.054 [2024-10-30 17:20:19.801819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.803696] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:37.054 [2024-10-30 17:20:19.818190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.818252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:37.054 [2024-10-30 17:20:19.818273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.497 ms 00:16:37.054 [2024-10-30 17:20:19.818282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.818405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.818418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:37.054 [2024-10-30 17:20:19.818428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:16:37.054 [2024-10-30 17:20:19.818436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.826858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.826912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.054 [2024-10-30 17:20:19.826923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.375 ms 00:16:37.054 [2024-10-30 17:20:19.826932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.827039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.827050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.054 [2024-10-30 17:20:19.827060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:37.054 [2024-10-30 17:20:19.827067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.827094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.827105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:37.054 [2024-10-30 17:20:19.827116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:37.054 [2024-10-30 17:20:19.827125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.827148] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:37.054 [2024-10-30 17:20:19.831283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.831326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.054 [2024-10-30 17:20:19.831339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.142 ms 00:16:37.054 [2024-10-30 17:20:19.831347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.831430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.831441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:37.054 [2024-10-30 17:20:19.831452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:37.054 [2024-10-30 17:20:19.831461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.831484] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:37.054 [2024-10-30 17:20:19.831507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:37.054 [2024-10-30 17:20:19.831549] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:37.054 [2024-10-30 17:20:19.831566] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:37.054 [2024-10-30 17:20:19.831672] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:37.054 [2024-10-30 17:20:19.831684] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:37.054 [2024-10-30 17:20:19.831696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:37.054 [2024-10-30 17:20:19.831707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:37.054 [2024-10-30 17:20:19.831717] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:37.054 [2024-10-30 17:20:19.831728] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:37.054 [2024-10-30 17:20:19.831737] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:37.054 [2024-10-30 17:20:19.831745] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:37.054 [2024-10-30 17:20:19.831753] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:37.054 [2024-10-30 17:20:19.831762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.831770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:37.054 [2024-10-30 17:20:19.831779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:16:37.054 [2024-10-30 17:20:19.831787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.831874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.054 [2024-10-30 17:20:19.831883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:37.054 [2024-10-30 17:20:19.831891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:37.054 [2024-10-30 17:20:19.831901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.054 [2024-10-30 17:20:19.832002] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:37.054 [2024-10-30 17:20:19.832013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:37.054 [2024-10-30 17:20:19.832021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.054 [2024-10-30 17:20:19.832029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.054 [2024-10-30 17:20:19.832037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:37.054 [2024-10-30 17:20:19.832044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:37.054 [2024-10-30 17:20:19.832051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:37.054 [2024-10-30 17:20:19.832060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:37.054 [2024-10-30 17:20:19.832068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:37.054 [2024-10-30 17:20:19.832075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.054 [2024-10-30 17:20:19.832083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:37.054 [2024-10-30 17:20:19.832090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:37.054 [2024-10-30 17:20:19.832097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.054 [2024-10-30 17:20:19.832112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:37.054 [2024-10-30 17:20:19.832120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:37.054 [2024-10-30 17:20:19.832130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.054 [2024-10-30 17:20:19.832137] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:37.054 [2024-10-30 17:20:19.832144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:37.054 [2024-10-30 17:20:19.832150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:37.055 [2024-10-30 17:20:19.832165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.055 [2024-10-30 17:20:19.832179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:37.055 [2024-10-30 17:20:19.832185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.055 [2024-10-30 17:20:19.832228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:37.055 [2024-10-30 17:20:19.832236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.055 [2024-10-30 17:20:19.832251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:37.055 [2024-10-30 17:20:19.832258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.055 [2024-10-30 17:20:19.832272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:37.055 [2024-10-30 17:20:19.832279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.055 [2024-10-30 17:20:19.832293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:37.055 [2024-10-30 17:20:19.832299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:37.055 [2024-10-30 17:20:19.832306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.055 [2024-10-30 17:20:19.832314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:37.055 [2024-10-30 17:20:19.832321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:37.055 [2024-10-30 17:20:19.832327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:37.055 [2024-10-30 17:20:19.832341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:37.055 [2024-10-30 17:20:19.832350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832357] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:37.055 [2024-10-30 17:20:19.832366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:37.055 [2024-10-30 17:20:19.832374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.055 [2024-10-30 17:20:19.832382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.055 [2024-10-30 17:20:19.832394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:37.055 
[2024-10-30 17:20:19.832402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:37.055 [2024-10-30 17:20:19.832409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:37.055 [2024-10-30 17:20:19.832417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:37.055 [2024-10-30 17:20:19.832424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:37.055 [2024-10-30 17:20:19.832431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:37.055 [2024-10-30 17:20:19.832440] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:37.055 [2024-10-30 17:20:19.832449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.055 [2024-10-30 17:20:19.832458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:37.055 [2024-10-30 17:20:19.832465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:37.055 [2024-10-30 17:20:19.832473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:37.055 [2024-10-30 17:20:19.832481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:37.055 [2024-10-30 17:20:19.832488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:37.055 [2024-10-30 17:20:19.832496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:37.055 [2024-10-30 17:20:19.832503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:37.055 [2024-10-30 17:20:19.832510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:37.055 [2024-10-30 17:20:19.832518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:37.055 [2024-10-30 17:20:19.832525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:37.055 [2024-10-30 17:20:19.832533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:37.055 [2024-10-30 17:20:19.832540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:37.055 [2024-10-30 17:20:19.832547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:37.055 [2024-10-30 17:20:19.832555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:37.055 [2024-10-30 17:20:19.832562] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:37.055 [2024-10-30 17:20:19.832570] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.055 [2024-10-30 17:20:19.832579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:37.055 [2024-10-30 17:20:19.832587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:37.055 [2024-10-30 17:20:19.832594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:37.055 [2024-10-30 17:20:19.832601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:37.055 [2024-10-30 17:20:19.832608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.832616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:37.055 [2024-10-30 17:20:19.832624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:16:37.055 [2024-10-30 17:20:19.832636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.865681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.865915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.055 [2024-10-30 17:20:19.866119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.989 ms 00:16:37.055 [2024-10-30 17:20:19.866164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.866341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.866377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:37.055 [2024-10-30 17:20:19.866494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:37.055 [2024-10-30 17:20:19.866519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.915542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.915752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.055 [2024-10-30 17:20:19.915959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.981 ms 00:16:37.055 [2024-10-30 17:20:19.916003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.916146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.916767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.055 [2024-10-30 17:20:19.916881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:37.055 [2024-10-30 17:20:19.916910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.917533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.917674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.055 [2024-10-30 17:20:19.917735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:16:37.055 [2024-10-30 17:20:19.917758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 
17:20:19.917956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.918021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.055 [2024-10-30 17:20:19.918127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:16:37.055 [2024-10-30 17:20:19.918154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.934930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.935109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.055 [2024-10-30 17:20:19.935170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.574 ms 00:16:37.055 [2024-10-30 17:20:19.935194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.949763] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:37.055 [2024-10-30 17:20:19.949976] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:37.055 [2024-10-30 17:20:19.950042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.950064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:37.055 [2024-10-30 17:20:19.950085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.682 ms 00:16:37.055 [2024-10-30 17:20:19.950104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.976394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.976602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:37.055 [2024-10-30 17:20:19.976665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.175 ms 00:16:37.055 [2024-10-30 17:20:19.976689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:19.990232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:19.990397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:37.055 [2024-10-30 17:20:19.990416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.439 ms 00:16:37.055 [2024-10-30 17:20:19.990425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:20.003301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:20.003476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:37.055 [2024-10-30 17:20:20.003497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.793 ms 00:16:37.055 [2024-10-30 17:20:20.003505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.055 [2024-10-30 17:20:20.004154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.055 [2024-10-30 17:20:20.004181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:37.055 [2024-10-30 17:20:20.004192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:16:37.055 [2024-10-30 17:20:20.004218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.315 [2024-10-30 17:20:20.074008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:37.315 [2024-10-30 17:20:20.074081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:37.315 [2024-10-30 17:20:20.074098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.759 ms 00:16:37.316 [2024-10-30 17:20:20.074107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.316 [2024-10-30 17:20:20.085694] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:37.316 [2024-10-30 17:20:20.106098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.316 [2024-10-30 17:20:20.106155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:37.316 [2024-10-30 17:20:20.106170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.825 ms 00:16:37.316 [2024-10-30 17:20:20.106179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.316 [2024-10-30 17:20:20.106317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.316 [2024-10-30 17:20:20.106334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:37.316 [2024-10-30 17:20:20.106344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:37.316 [2024-10-30 17:20:20.106354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.316 [2024-10-30 17:20:20.106415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.316 [2024-10-30 17:20:20.106425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:37.316 [2024-10-30 17:20:20.106434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:37.316 [2024-10-30 17:20:20.106443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.316 [2024-10-30 17:20:20.106470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.316 [2024-10-30 17:20:20.106480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:37.316 [2024-10-30 17:20:20.106491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:37.316 [2024-10-30 17:20:20.106500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.316 [2024-10-30 17:20:20.106540] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:37.316 [2024-10-30 17:20:20.106554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.316 [2024-10-30 17:20:20.106563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:37.316 [2024-10-30 17:20:20.106572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:37.316 [2024-10-30 17:20:20.106580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.316 [2024-10-30 17:20:20.133610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.316 [2024-10-30 17:20:20.133670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:37.316 [2024-10-30 17:20:20.133686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.006 ms 00:16:37.316 [2024-10-30 17:20:20.133694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.316 [2024-10-30 17:20:20.133837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.316 [2024-10-30 17:20:20.133850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:16:37.316 [2024-10-30 17:20:20.133860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:37.316 [2024-10-30 17:20:20.133868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.316 [2024-10-30 17:20:20.135037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:37.316 [2024-10-30 17:20:20.138482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.459 ms, result 0 00:16:37.316 [2024-10-30 17:20:20.139745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:37.316 [2024-10-30 17:20:20.153095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:38.257  [2024-10-30T17:20:22.181Z] Copying: 21/256 [MB] (21 MBps) [2024-10-30T17:20:23.567Z] Copying: 36/256 [MB] (15 MBps) [2024-10-30T17:20:24.506Z] Copying: 58/256 [MB] (22 MBps) [2024-10-30T17:20:25.447Z] Copying: 78/256 [MB] (19 MBps) [2024-10-30T17:20:26.390Z] Copying: 98/256 [MB] (20 MBps) [2024-10-30T17:20:27.334Z] Copying: 117/256 [MB] (19 MBps) [2024-10-30T17:20:28.278Z] Copying: 138/256 [MB] (20 MBps) [2024-10-30T17:20:29.221Z] Copying: 156/256 [MB] (18 MBps) [2024-10-30T17:20:30.166Z] Copying: 178/256 [MB] (21 MBps) [2024-10-30T17:20:31.554Z] Copying: 199/256 [MB] (21 MBps) [2024-10-30T17:20:32.498Z] Copying: 221/256 [MB] (21 MBps) [2024-10-30T17:20:33.070Z] Copying: 238/256 [MB] (17 MBps) [2024-10-30T17:20:33.070Z] Copying: 256/256 [MB] (average 19 MBps)[2024-10-30 17:20:33.067515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:50.350 [2024-10-30 17:20:33.077729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.077777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:50.350 [2024-10-30 17:20:33.077793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:50.350 [2024-10-30 17:20:33.077815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.077840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:50.350 [2024-10-30 17:20:33.080805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.080851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:50.350 [2024-10-30 17:20:33.080862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:16:50.350 [2024-10-30 17:20:33.080869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.081131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.081142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:50.350 [2024-10-30 17:20:33.081151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:16:50.350 [2024-10-30 17:20:33.081159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.084869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.085045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:50.350 [2024-10-30 17:20:33.085069] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.695 ms 00:16:50.350 [2024-10-30 17:20:33.085078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.092056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.092222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:50.350 [2024-10-30 17:20:33.092241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.954 ms 00:16:50.350 [2024-10-30 17:20:33.092250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.117833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.117879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:50.350 [2024-10-30 17:20:33.117891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.515 ms 00:16:50.350 [2024-10-30 17:20:33.117900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.134491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.134536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:50.350 [2024-10-30 17:20:33.134557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.544 ms 00:16:50.350 [2024-10-30 17:20:33.134566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.134719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.134731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:50.350 [2024-10-30 17:20:33.134740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:50.350 [2024-10-30 17:20:33.134748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.160067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.160111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:50.350 [2024-10-30 17:20:33.160122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.293 ms 00:16:50.350 [2024-10-30 17:20:33.160128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.185358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.185401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:50.350 [2024-10-30 17:20:33.185414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.164 ms 00:16:50.350 [2024-10-30 17:20:33.185420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.350 [2024-10-30 17:20:33.209832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.350 [2024-10-30 17:20:33.209875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:50.350 [2024-10-30 17:20:33.209886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.365 ms 00:16:50.351 [2024-10-30 17:20:33.209892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.351 [2024-10-30 17:20:33.234410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.351 [2024-10-30 17:20:33.234454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:16:50.351 [2024-10-30 17:20:33.234465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.441 ms 00:16:50.351 [2024-10-30 17:20:33.234472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.351 [2024-10-30 17:20:33.234517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:50.351 [2024-10-30 17:20:33.234540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 
17:20:33.234705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:16:50.351 [2024-10-30 17:20:33.234890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.234995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:50.351 [2024-10-30 17:20:33.235195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:50.352 [2024-10-30 17:20:33.235328] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:50.352 [2024-10-30 17:20:33.235337] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c105cd40-d948-4956-a986-47ea9020475a 00:16:50.352 [2024-10-30 17:20:33.235345] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:50.352 [2024-10-30 17:20:33.235353] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:50.352 [2024-10-30 17:20:33.235359] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:50.352 [2024-10-30 17:20:33.235367] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:50.352 [2024-10-30 17:20:33.235375] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:50.352 [2024-10-30 17:20:33.235382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:50.352 [2024-10-30 17:20:33.235390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:50.352 [2024-10-30 17:20:33.235397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:50.352 [2024-10-30 17:20:33.235403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:50.352 [2024-10-30 17:20:33.235410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.352 [2024-10-30 17:20:33.235418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:50.352 [2024-10-30 17:20:33.235426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:16:50.352 [2024-10-30 17:20:33.235436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.352 [2024-10-30 17:20:33.248888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.352 [2024-10-30 17:20:33.248927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:50.352 [2024-10-30 17:20:33.248938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.416 ms 00:16:50.352 [2024-10-30 17:20:33.248946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.352 [2024-10-30 17:20:33.249380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.352 [2024-10-30 17:20:33.249399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:50.352 [2024-10-30 17:20:33.249409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:16:50.352 [2024-10-30 17:20:33.249416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.352 [2024-10-30 17:20:33.288102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.352 [2024-10-30 17:20:33.288149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:50.352 [2024-10-30 17:20:33.288160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.352 [2024-10-30 17:20:33.288169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.352 
[2024-10-30 17:20:33.288278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.352 [2024-10-30 17:20:33.288293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:50.352 [2024-10-30 17:20:33.288302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.352 [2024-10-30 17:20:33.288310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.352 [2024-10-30 17:20:33.288376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.352 [2024-10-30 17:20:33.288386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:50.352 [2024-10-30 17:20:33.288394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.352 [2024-10-30 17:20:33.288402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.352 [2024-10-30 17:20:33.288418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.352 [2024-10-30 17:20:33.288426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:50.352 [2024-10-30 17:20:33.288437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.352 [2024-10-30 17:20:33.288445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.372056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.613 [2024-10-30 17:20:33.372327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:50.613 [2024-10-30 17:20:33.372348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.613 [2024-10-30 17:20:33.372358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.441836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.613 [2024-10-30 17:20:33.441878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:50.613 [2024-10-30 17:20:33.441896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.613 [2024-10-30 17:20:33.441905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.441961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.613 [2024-10-30 17:20:33.441972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:50.613 [2024-10-30 17:20:33.441980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.613 [2024-10-30 17:20:33.441988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.442019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.613 [2024-10-30 17:20:33.442028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:50.613 [2024-10-30 17:20:33.442036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.613 [2024-10-30 17:20:33.442044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.442143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.613 [2024-10-30 17:20:33.442153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:50.613 [2024-10-30 17:20:33.442162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.613 [2024-10-30 17:20:33.442169] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.442216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.613 [2024-10-30 17:20:33.442227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:50.613 [2024-10-30 17:20:33.442235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.613 [2024-10-30 17:20:33.442243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.442288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.613 [2024-10-30 17:20:33.442298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:50.613 [2024-10-30 17:20:33.442306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.613 [2024-10-30 17:20:33.442314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.442360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.613 [2024-10-30 17:20:33.442369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:50.613 [2024-10-30 17:20:33.442378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.613 [2024-10-30 17:20:33.442385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.613 [2024-10-30 17:20:33.442537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.798 ms, result 0 00:16:51.185 00:16:51.185 00:16:51.447 17:20:34 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:51.447 17:20:34 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:52.019 17:20:34 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.019 [2024-10-30 17:20:34.810527] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:16:52.019 [2024-10-30 17:20:34.810901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74021 ] 00:16:52.019 [2024-10-30 17:20:34.977597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.280 [2024-10-30 17:20:35.099680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.541 [2024-10-30 17:20:35.386963] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:52.541 [2024-10-30 17:20:35.387042] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:52.804 [2024-10-30 17:20:35.548556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.548618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:52.804 [2024-10-30 17:20:35.548634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:52.804 [2024-10-30 17:20:35.548642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.551715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.551767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:52.804 [2024-10-30 17:20:35.551778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.051 ms 00:16:52.804 [2024-10-30 17:20:35.551787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.552167] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:52.804 [2024-10-30 17:20:35.553020] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:52.804 [2024-10-30 17:20:35.553263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.553278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:52.804 [2024-10-30 17:20:35.553288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:16:52.804 [2024-10-30 17:20:35.553298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.555074] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:52.804 [2024-10-30 17:20:35.569166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.569365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:52.804 [2024-10-30 17:20:35.569394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.094 ms 00:16:52.804 [2024-10-30 17:20:35.569404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.569594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.569620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:52.804 [2024-10-30 17:20:35.569630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:52.804 [2024-10-30 17:20:35.569638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.577543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:52.804 [2024-10-30 17:20:35.577589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:52.804 [2024-10-30 17:20:35.577600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.857 ms 00:16:52.804 [2024-10-30 17:20:35.577607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.577712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.577722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:52.804 [2024-10-30 17:20:35.577730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:52.804 [2024-10-30 17:20:35.577738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.577766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.577774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:52.804 [2024-10-30 17:20:35.577785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:52.804 [2024-10-30 17:20:35.577793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.577827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:52.804 [2024-10-30 17:20:35.581792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.581856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:52.804 [2024-10-30 17:20:35.581868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.970 ms 00:16:52.804 [2024-10-30 17:20:35.581875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.581947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.581958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:52.804 [2024-10-30 17:20:35.581968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:52.804 [2024-10-30 17:20:35.581975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.581994] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:52.804 [2024-10-30 17:20:35.582015] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:52.804 [2024-10-30 17:20:35.582054] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:52.804 [2024-10-30 17:20:35.582070] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:52.804 [2024-10-30 17:20:35.582177] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:52.804 [2024-10-30 17:20:35.582188] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:52.804 [2024-10-30 17:20:35.582223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:52.804 [2024-10-30 17:20:35.582235] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:52.804 [2024-10-30 17:20:35.582244] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:52.804 [2024-10-30 17:20:35.582256] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:52.804 [2024-10-30 17:20:35.582264] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:52.804 [2024-10-30 17:20:35.582273] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:52.804 [2024-10-30 17:20:35.582280] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:52.804 [2024-10-30 17:20:35.582289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.582297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:52.804 [2024-10-30 17:20:35.582305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:16:52.804 [2024-10-30 17:20:35.582312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.582400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.804 [2024-10-30 17:20:35.582409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:52.804 [2024-10-30 17:20:35.582417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:52.804 [2024-10-30 17:20:35.582428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.804 [2024-10-30 17:20:35.582528] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:52.804 [2024-10-30 17:20:35.582539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:52.804 [2024-10-30 17:20:35.582548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:52.804 [2024-10-30 17:20:35.582557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.804 [2024-10-30 17:20:35.582565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:52.804 [2024-10-30 17:20:35.582573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:52.804 [2024-10-30 17:20:35.582579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:52.804 [2024-10-30 17:20:35.582586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:52.804 [2024-10-30 17:20:35.582593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:52.804 [2024-10-30 17:20:35.582600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:52.804 [2024-10-30 17:20:35.582607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:52.804 [2024-10-30 17:20:35.582614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:52.804 [2024-10-30 17:20:35.582620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:52.804 [2024-10-30 17:20:35.582634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:52.804 [2024-10-30 17:20:35.582641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:52.804 [2024-10-30 17:20:35.582649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.804 [2024-10-30 17:20:35.582658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:52.804 [2024-10-30 17:20:35.582665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:52.804 [2024-10-30 17:20:35.582672] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.804 [2024-10-30 17:20:35.582678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:52.804 [2024-10-30 17:20:35.582685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:52.804 [2024-10-30 17:20:35.582692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.804 [2024-10-30 17:20:35.582700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:52.804 [2024-10-30 17:20:35.582707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:52.805 [2024-10-30 17:20:35.582714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.805 [2024-10-30 17:20:35.582720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:52.805 [2024-10-30 17:20:35.582727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:52.805 [2024-10-30 17:20:35.582734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.805 [2024-10-30 17:20:35.582742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:52.805 [2024-10-30 17:20:35.582749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:52.805 [2024-10-30 17:20:35.582755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.805 [2024-10-30 17:20:35.582763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:52.805 [2024-10-30 17:20:35.582769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:52.805 [2024-10-30 17:20:35.582776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:52.805 [2024-10-30 17:20:35.582783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:52.805 [2024-10-30 17:20:35.582790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:52.805 [2024-10-30 17:20:35.582796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:52.805 [2024-10-30 17:20:35.582803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:52.805 [2024-10-30 17:20:35.582810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:52.805 [2024-10-30 17:20:35.582816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.805 [2024-10-30 17:20:35.582822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:52.805 [2024-10-30 17:20:35.582829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:52.805 [2024-10-30 17:20:35.582835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.805 [2024-10-30 17:20:35.582842] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:52.805 [2024-10-30 17:20:35.582851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:52.805 [2024-10-30 17:20:35.582859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:52.805 [2024-10-30 17:20:35.582866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.805 [2024-10-30 17:20:35.582876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:52.805 [2024-10-30 17:20:35.582885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:52.805 [2024-10-30 17:20:35.582893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:52.805 
[2024-10-30 17:20:35.582900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:52.805 [2024-10-30 17:20:35.582907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:52.805 [2024-10-30 17:20:35.582914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:52.805 [2024-10-30 17:20:35.582923] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:52.805 [2024-10-30 17:20:35.582933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:52.805 [2024-10-30 17:20:35.582941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:52.805 [2024-10-30 17:20:35.582949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:52.805 [2024-10-30 17:20:35.582958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:52.805 [2024-10-30 17:20:35.582964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:52.805 [2024-10-30 17:20:35.582972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:52.805 [2024-10-30 17:20:35.582980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:52.805 [2024-10-30 17:20:35.582987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:52.805 [2024-10-30 17:20:35.582994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:52.805 [2024-10-30 17:20:35.583001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:52.805 [2024-10-30 17:20:35.583008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:52.805 [2024-10-30 17:20:35.583016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:52.805 [2024-10-30 17:20:35.583023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:52.805 [2024-10-30 17:20:35.583030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:52.805 [2024-10-30 17:20:35.583037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:52.805 [2024-10-30 17:20:35.583044] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:52.805 [2024-10-30 17:20:35.583053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:52.805 [2024-10-30 17:20:35.583061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:52.805 [2024-10-30 17:20:35.583070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:52.805 [2024-10-30 17:20:35.583077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:52.805 [2024-10-30 17:20:35.583085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:52.805 [2024-10-30 17:20:35.583092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.583100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:52.805 [2024-10-30 17:20:35.583107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:16:52.805 [2024-10-30 17:20:35.583118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.614929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.614978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:52.805 [2024-10-30 17:20:35.614990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.759 ms 00:16:52.805 [2024-10-30 17:20:35.615000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.615137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.615148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:52.805 [2024-10-30 17:20:35.615161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:52.805 [2024-10-30 17:20:35.615169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.660966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.661151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:52.805 [2024-10-30 17:20:35.661172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.774 ms 00:16:52.805 [2024-10-30 17:20:35.661181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.661310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.661323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:52.805 [2024-10-30 17:20:35.661332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:52.805 [2024-10-30 17:20:35.661340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.661842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.661868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:52.805 [2024-10-30 17:20:35.661879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:16:52.805 [2024-10-30 17:20:35.661887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.662034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.662047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:52.805 [2024-10-30 17:20:35.662057] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:16:52.805 [2024-10-30 17:20:35.662065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.676985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.677019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:52.805 [2024-10-30 17:20:35.677030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.898 ms 00:16:52.805 [2024-10-30 17:20:35.677038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.690342] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:52.805 [2024-10-30 17:20:35.690376] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:52.805 [2024-10-30 17:20:35.690388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.690397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:52.805 [2024-10-30 17:20:35.690406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.241 ms 00:16:52.805 [2024-10-30 17:20:35.690414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.715651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.715796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:52.805 [2024-10-30 17:20:35.715813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.162 ms 00:16:52.805 [2024-10-30 17:20:35.715821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.728225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.728262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:52.805 [2024-10-30 17:20:35.728273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.332 ms 00:16:52.805 [2024-10-30 17:20:35.728280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.740521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.740561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:52.805 [2024-10-30 17:20:35.740572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:16:52.805 [2024-10-30 17:20:35.740580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.805 [2024-10-30 17:20:35.741238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.805 [2024-10-30 17:20:35.741266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:52.805 [2024-10-30 17:20:35.741277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:16:52.805 [2024-10-30 17:20:35.741285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.067 [2024-10-30 17:20:35.806889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.067 [2024-10-30 17:20:35.807117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:53.067 [2024-10-30 17:20:35.807141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.577 ms 00:16:53.067 [2024-10-30 17:20:35.807150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.067 [2024-10-30 17:20:35.818094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:53.067 [2024-10-30 17:20:35.837001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.067 [2024-10-30 17:20:35.837052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:53.067 [2024-10-30 17:20:35.837065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.731 ms 00:16:53.067 [2024-10-30 17:20:35.837074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.067 [2024-10-30 17:20:35.837169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.067 [2024-10-30 17:20:35.837183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:53.067 [2024-10-30 17:20:35.837193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:53.067 [2024-10-30 17:20:35.837235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.067 [2024-10-30 17:20:35.837295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.067 [2024-10-30 17:20:35.837305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:53.067 [2024-10-30 17:20:35.837314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:53.067 [2024-10-30 17:20:35.837323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.067 [2024-10-30 17:20:35.837361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.067 [2024-10-30 17:20:35.837371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:53.068 [2024-10-30 17:20:35.837382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:53.068 [2024-10-30 17:20:35.837390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.068 [2024-10-30 17:20:35.837427] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:53.068 [2024-10-30 17:20:35.837438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.068 [2024-10-30 17:20:35.837447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:53.068 [2024-10-30 17:20:35.837456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:53.068 [2024-10-30 17:20:35.837464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.068 [2024-10-30 17:20:35.863078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.068 [2024-10-30 17:20:35.863134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:53.068 [2024-10-30 17:20:35.863147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.592 ms 00:16:53.068 [2024-10-30 17:20:35.863156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.068 [2024-10-30 17:20:35.863311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.068 [2024-10-30 17:20:35.863325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:53.068 [2024-10-30 17:20:35.863335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:53.068 [2024-10-30 17:20:35.863343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:53.068 [2024-10-30 17:20:35.864360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:53.068 [2024-10-30 17:20:35.867997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.455 ms, result 0 00:16:53.068 [2024-10-30 17:20:35.869499] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:53.068 [2024-10-30 17:20:35.882994] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:53.329  [2024-10-30T17:20:36.310Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-10-30 17:20:36.193035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:53.329 [2024-10-30 17:20:36.202051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.202099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:53.329 [2024-10-30 17:20:36.202111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:53.329 [2024-10-30 17:20:36.202119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.329 [2024-10-30 17:20:36.202142] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:53.329 [2024-10-30 17:20:36.205049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.205261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:53.329 [2024-10-30 17:20:36.205283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:16:53.329 [2024-10-30 17:20:36.205292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.329 [2024-10-30 17:20:36.208015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.208062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:53.329 [2024-10-30 17:20:36.208073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:16:53.329 [2024-10-30 17:20:36.208082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.329 [2024-10-30 17:20:36.212432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.212464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:53.329 [2024-10-30 17:20:36.212481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.334 ms 00:16:53.329 [2024-10-30 17:20:36.212489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.329 [2024-10-30 17:20:36.219439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.219607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:53.329 [2024-10-30 17:20:36.219626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.920 ms 00:16:53.329 [2024-10-30 17:20:36.219634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.329 [2024-10-30 17:20:36.244847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.244892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:53.329 [2024-10-30 17:20:36.244904] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.161 ms 00:16:53.329 [2024-10-30 17:20:36.244911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.329 [2024-10-30 17:20:36.261366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.261409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:53.329 [2024-10-30 17:20:36.261428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.408 ms 00:16:53.329 [2024-10-30 17:20:36.261438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.329 [2024-10-30 17:20:36.261586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.261598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:53.329 [2024-10-30 17:20:36.261607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:53.329 [2024-10-30 17:20:36.261614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.329 [2024-10-30 17:20:36.286852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.329 [2024-10-30 17:20:36.287022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:53.329 [2024-10-30 17:20:36.287041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.213 ms 00:16:53.329 [2024-10-30 17:20:36.287049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.592 [2024-10-30 17:20:36.311836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.592 [2024-10-30 17:20:36.311877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:53.592 [2024-10-30 17:20:36.311888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.682 ms 00:16:53.592 [2024-10-30 17:20:36.311895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.592 [2024-10-30 17:20:36.336357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.592 [2024-10-30 17:20:36.336400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:53.592 [2024-10-30 17:20:36.336412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.415 ms 00:16:53.592 [2024-10-30 17:20:36.336419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.592 [2024-10-30 17:20:36.360349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.592 [2024-10-30 17:20:36.360392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:53.592 [2024-10-30 17:20:36.360403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.856 ms 00:16:53.592 [2024-10-30 17:20:36.360410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.592 [2024-10-30 17:20:36.360455] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:53.592 [2024-10-30 17:20:36.360476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:16:53.592 [2024-10-30 17:20:36.360509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:53.592 [2024-10-30 17:20:36.360643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.360999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361064] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:53.593 [2024-10-30 17:20:36.361269] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:53.593 [2024-10-30 17:20:36.361278] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c105cd40-d948-4956-a986-47ea9020475a 00:16:53.593 [2024-10-30 17:20:36.361287] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:53.593 [2024-10-30 17:20:36.361294] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:16:53.593 [2024-10-30 17:20:36.361301] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:53.593 [2024-10-30 17:20:36.361309] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:53.593 [2024-10-30 17:20:36.361332] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:53.593 [2024-10-30 17:20:36.361340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:53.593 [2024-10-30 17:20:36.361348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:53.593 [2024-10-30 17:20:36.361354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:53.593 [2024-10-30 17:20:36.361360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:53.593 [2024-10-30 17:20:36.361367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.593 [2024-10-30 17:20:36.361376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:53.593 [2024-10-30 17:20:36.361388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:16:53.593 [2024-10-30 17:20:36.361396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.593 [2024-10-30 17:20:36.374489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.593 [2024-10-30 17:20:36.374530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:53.593 [2024-10-30 17:20:36.374542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.060 ms 00:16:53.593 [2024-10-30 17:20:36.374549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.593 [2024-10-30 17:20:36.374954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.594 [2024-10-30 17:20:36.374978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:53.594 [2024-10-30 17:20:36.374988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:16:53.594 [2024-10-30 17:20:36.374995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.413578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.413623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:53.594 [2024-10-30 17:20:36.413633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.413642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.413719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.413735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.594 [2024-10-30 17:20:36.413743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.413751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.413817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.413827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.594 [2024-10-30 17:20:36.413835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.413842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.413859] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.413867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.594 [2024-10-30 17:20:36.413878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.413886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.496877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.497106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.594 [2024-10-30 17:20:36.497128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.497136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.565243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.565291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:53.594 [2024-10-30 17:20:36.565309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.565318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.565376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.565386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:53.594 [2024-10-30 17:20:36.565396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.565404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.565438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.565447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:53.594 [2024-10-30 17:20:36.565456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.565467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.565563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.565573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:53.594 [2024-10-30 17:20:36.565582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.565590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.565623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.565632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:53.594 [2024-10-30 17:20:36.565641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.565649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.565693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.565702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:53.594 [2024-10-30 17:20:36.565710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.565718] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.565767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:53.594 [2024-10-30 17:20:36.565776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:53.594 [2024-10-30 17:20:36.565784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:53.594 [2024-10-30 17:20:36.565795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.594 [2024-10-30 17:20:36.565976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 363.906 ms, result 0 00:16:54.538 00:16:54.538 00:16:54.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.538 17:20:37 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74052 00:16:54.538 17:20:37 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:54.538 17:20:37 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74052 00:16:54.538 17:20:37 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74052 ']' 00:16:54.538 17:20:37 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.538 17:20:37 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:54.538 17:20:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.538 17:20:37 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:54.538 17:20:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:54.538 [2024-10-30 17:20:37.412922] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:16:54.538 [2024-10-30 17:20:37.413079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74052 ] 00:16:54.800 [2024-10-30 17:20:37.577940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.800 [2024-10-30 17:20:37.696111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.742 17:20:38 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:55.742 17:20:38 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:16:55.742 17:20:38 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:55.742 [2024-10-30 17:20:38.592735] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:55.742 [2024-10-30 17:20:38.592815] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.005 [2024-10-30 17:20:38.770636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.770696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:56.005 [2024-10-30 17:20:38.770714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:56.005 [2024-10-30 17:20:38.770723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.773790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.773853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.005 [2024-10-30 17:20:38.773867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.045 ms 00:16:56.005 [2024-10-30 17:20:38.773876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.773989] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:56.005 [2024-10-30 17:20:38.774759] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:56.005 [2024-10-30 17:20:38.774794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.774803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.005 [2024-10-30 17:20:38.774815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:16:56.005 [2024-10-30 17:20:38.774823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.776561] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:56.005 [2024-10-30 17:20:38.790574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.790638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:56.005 [2024-10-30 17:20:38.790653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.022 ms 00:16:56.005 [2024-10-30 17:20:38.790663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.790772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.790786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:56.005 [2024-10-30 17:20:38.790795] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:56.005 [2024-10-30 17:20:38.790805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.798782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.798831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.005 [2024-10-30 17:20:38.798841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.926 ms 00:16:56.005 [2024-10-30 17:20:38.798851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.798967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.798980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.005 [2024-10-30 17:20:38.798988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:56.005 [2024-10-30 17:20:38.798998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.799024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.799038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:56.005 [2024-10-30 17:20:38.799046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:56.005 [2024-10-30 17:20:38.799055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.799078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:56.005 [2024-10-30 17:20:38.803188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.803234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.005 [2024-10-30 17:20:38.803247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.112 ms 00:16:56.005 [2024-10-30 17:20:38.803255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.803332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.005 [2024-10-30 17:20:38.803342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:56.005 [2024-10-30 17:20:38.803354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:56.005 [2024-10-30 17:20:38.803361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.005 [2024-10-30 17:20:38.803385] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:56.005 [2024-10-30 17:20:38.803409] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:56.005 [2024-10-30 17:20:38.803455] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:56.005 [2024-10-30 17:20:38.803471] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:56.005 [2024-10-30 17:20:38.803583] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:56.005 [2024-10-30 17:20:38.803595] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:56.005 [2024-10-30 17:20:38.803608] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:56.005 [2024-10-30 17:20:38.803620] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:56.005 [2024-10-30 17:20:38.803633] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:56.005 [2024-10-30 17:20:38.803642] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:56.005 [2024-10-30 17:20:38.803651] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:56.005 [2024-10-30 17:20:38.803659] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:56.006 [2024-10-30 17:20:38.803671] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:56.006 [2024-10-30 17:20:38.803678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.006 [2024-10-30 17:20:38.803687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:56.006 [2024-10-30 17:20:38.803695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:16:56.006 [2024-10-30 17:20:38.803704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.006 [2024-10-30 17:20:38.803790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.006 [2024-10-30 17:20:38.803801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:56.006 [2024-10-30 17:20:38.803812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:56.006 [2024-10-30 17:20:38.803821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.006 [2024-10-30 17:20:38.803921] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:56.006 [2024-10-30 17:20:38.803940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:56.006 [2024-10-30 17:20:38.803949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.006 [2024-10-30 17:20:38.803958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.006 [2024-10-30 17:20:38.803966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:56.006 [2024-10-30 17:20:38.803974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:56.006 [2024-10-30 17:20:38.803981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:56.006 [2024-10-30 17:20:38.803993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:56.006 [2024-10-30 17:20:38.804000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.006 [2024-10-30 17:20:38.804017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:56.006 [2024-10-30 17:20:38.804025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:56.006 [2024-10-30 17:20:38.804032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.006 [2024-10-30 17:20:38.804040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:56.006 [2024-10-30 17:20:38.804049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:56.006 [2024-10-30 17:20:38.804058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.006 
[2024-10-30 17:20:38.804065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:56.006 [2024-10-30 17:20:38.804074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:56.006 [2024-10-30 17:20:38.804081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:56.006 [2024-10-30 17:20:38.804104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.006 [2024-10-30 17:20:38.804119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:56.006 [2024-10-30 17:20:38.804130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.006 [2024-10-30 17:20:38.804144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:56.006 [2024-10-30 17:20:38.804152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.006 [2024-10-30 17:20:38.804167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:56.006 [2024-10-30 17:20:38.804175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.006 [2024-10-30 17:20:38.804191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:56.006 [2024-10-30 17:20:38.804226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.006 [2024-10-30 17:20:38.804243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:56.006 [2024-10-30 17:20:38.804252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:56.006 [2024-10-30 17:20:38.804259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.006 [2024-10-30 17:20:38.804269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:56.006 [2024-10-30 17:20:38.804277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:56.006 [2024-10-30 17:20:38.804288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:56.006 [2024-10-30 17:20:38.804305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:56.006 [2024-10-30 17:20:38.804311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804320] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:56.006 [2024-10-30 17:20:38.804327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:56.006 [2024-10-30 17:20:38.804337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.006 [2024-10-30 17:20:38.804350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.006 [2024-10-30 17:20:38.804360] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:56.006 [2024-10-30 17:20:38.804367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:56.006 [2024-10-30 17:20:38.804376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:56.006 [2024-10-30 17:20:38.804383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:56.006 [2024-10-30 17:20:38.804391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:56.006 [2024-10-30 17:20:38.804399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:56.006 [2024-10-30 17:20:38.804409] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:56.006 [2024-10-30 17:20:38.804419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.006 [2024-10-30 17:20:38.804432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:56.006 [2024-10-30 17:20:38.804441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:56.006 [2024-10-30 17:20:38.804450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:56.006 [2024-10-30 17:20:38.804466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:56.006 [2024-10-30 17:20:38.804475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:56.006 [2024-10-30 17:20:38.804482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:56.006 [2024-10-30 17:20:38.804492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:56.006 [2024-10-30 17:20:38.804499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:56.006 [2024-10-30 17:20:38.804508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:56.006 [2024-10-30 17:20:38.804515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:56.006 [2024-10-30 17:20:38.804524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:56.006 [2024-10-30 17:20:38.804531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:56.006 [2024-10-30 17:20:38.804540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:56.006 [2024-10-30 17:20:38.804547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:56.006 [2024-10-30 17:20:38.804557] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:56.006 [2024-10-30 
17:20:38.804565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.006 [2024-10-30 17:20:38.804576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:56.006 [2024-10-30 17:20:38.804583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:56.006 [2024-10-30 17:20:38.804593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:56.006 [2024-10-30 17:20:38.804601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:56.006 [2024-10-30 17:20:38.804610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.006 [2024-10-30 17:20:38.804617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:56.006 [2024-10-30 17:20:38.804627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:16:56.006 [2024-10-30 17:20:38.804635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.006 [2024-10-30 17:20:38.835989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.006 [2024-10-30 17:20:38.836038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.006 [2024-10-30 17:20:38.836052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.292 ms 00:16:56.006 [2024-10-30 17:20:38.836061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.006 [2024-10-30 17:20:38.836193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.006 [2024-10-30 17:20:38.836236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.006 [2024-10-30 17:20:38.836248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:56.006 [2024-10-30 17:20:38.836257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.006 [2024-10-30 17:20:38.870883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.006 [2024-10-30 17:20:38.870925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.006 [2024-10-30 17:20:38.870939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.600 ms 00:16:56.006 [2024-10-30 17:20:38.870950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.006 [2024-10-30 17:20:38.871038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.006 [2024-10-30 17:20:38.871048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.006 [2024-10-30 17:20:38.871059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.006 [2024-10-30 17:20:38.871067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.006 [2024-10-30 17:20:38.871611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.007 [2024-10-30 17:20:38.871644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.007 [2024-10-30 17:20:38.871656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:16:56.007 [2024-10-30 17:20:38.871666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:56.007 [2024-10-30 17:20:38.871812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.007 [2024-10-30 17:20:38.871829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.007 [2024-10-30 17:20:38.871840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:16:56.007 [2024-10-30 17:20:38.871848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.007 [2024-10-30 17:20:38.889379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.007 [2024-10-30 17:20:38.889586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.007 [2024-10-30 17:20:38.889610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.505 ms 00:16:56.007 [2024-10-30 17:20:38.889619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.007 [2024-10-30 17:20:38.903642] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:56.007 [2024-10-30 17:20:38.903687] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:56.007 [2024-10-30 17:20:38.903704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.007 [2024-10-30 17:20:38.903712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:56.007 [2024-10-30 17:20:38.903724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.969 ms 00:16:56.007 [2024-10-30 17:20:38.903732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.007 [2024-10-30 17:20:38.930061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.007 [2024-10-30 17:20:38.930106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:56.007 [2024-10-30 17:20:38.930120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.236 ms 00:16:56.007 [2024-10-30 17:20:38.930128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.007 [2024-10-30 17:20:38.943002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.007 [2024-10-30 17:20:38.943047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:56.007 [2024-10-30 17:20:38.943064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.755 ms 00:16:56.007 [2024-10-30 17:20:38.943071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.007 [2024-10-30 17:20:38.955880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.007 [2024-10-30 17:20:38.955923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:56.007 [2024-10-30 17:20:38.955937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.719 ms 00:16:56.007 [2024-10-30 17:20:38.955945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.007 [2024-10-30 17:20:38.956623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.007 [2024-10-30 17:20:38.956652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:56.007 [2024-10-30 17:20:38.956664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:16:56.007 [2024-10-30 17:20:38.956672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 
17:20:39.038784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.267 [2024-10-30 17:20:39.038849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:56.267 [2024-10-30 17:20:39.038870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.080 ms 00:16:56.267 [2024-10-30 17:20:39.038880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 17:20:39.050260] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:56.267 [2024-10-30 17:20:39.069846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.267 [2024-10-30 17:20:39.070053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.267 [2024-10-30 17:20:39.070074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.865 ms 00:16:56.267 [2024-10-30 17:20:39.070086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 17:20:39.070189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.267 [2024-10-30 17:20:39.070236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:56.267 [2024-10-30 17:20:39.070248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:56.267 [2024-10-30 17:20:39.070258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 17:20:39.070316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.267 [2024-10-30 17:20:39.070328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:56.267 [2024-10-30 17:20:39.070337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:56.267 [2024-10-30 17:20:39.070347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 17:20:39.070375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.267 [2024-10-30 17:20:39.070386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:56.267 [2024-10-30 17:20:39.070395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:56.267 [2024-10-30 17:20:39.070408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 17:20:39.070446] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:56.267 [2024-10-30 17:20:39.070460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.267 [2024-10-30 17:20:39.070468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:56.267 [2024-10-30 17:20:39.070480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:56.267 [2024-10-30 17:20:39.070492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 17:20:39.096822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.267 [2024-10-30 17:20:39.096996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:56.267 [2024-10-30 17:20:39.097024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.299 ms 00:16:56.267 [2024-10-30 17:20:39.097034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 17:20:39.097156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.267 [2024-10-30 17:20:39.097169] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:56.267 [2024-10-30 17:20:39.097182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:56.267 [2024-10-30 17:20:39.097190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.267 [2024-10-30 17:20:39.098454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:56.267 [2024-10-30 17:20:39.101838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.463 ms, result 0 00:16:56.267 [2024-10-30 17:20:39.103697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.267 Some configs were skipped because the RPC state that can call them passed over. 00:16:56.267 17:20:39 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:56.528 [2024-10-30 17:20:39.348855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.528 [2024-10-30 17:20:39.349036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:56.528 [2024-10-30 17:20:39.349100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:16:56.528 [2024-10-30 17:20:39.349128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.528 [2024-10-30 17:20:39.349183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.558 ms, result 0 00:16:56.528 true 00:16:56.528 17:20:39 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:56.792 [2024-10-30 17:20:39.564820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.792 [2024-10-30 17:20:39.564997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:56.792 [2024-10-30 17:20:39.565063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:16:56.792 [2024-10-30 17:20:39.565087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.792 [2024-10-30 17:20:39.565147] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.221 ms, result 0 00:16:56.792 true 00:16:56.792 17:20:39 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74052 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74052 ']' 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74052 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74052 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:56.792 killing process with pid 74052 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74052' 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74052 00:16:56.792 17:20:39 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74052 00:16:57.364 [2024-10-30 17:20:40.197477] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-10-30 17:20:40.197519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:57.364 [2024-10-30 17:20:40.197529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:57.364 [2024-10-30 17:20:40.197537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-10-30 17:20:40.197554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:57.364 [2024-10-30 17:20:40.199655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-10-30 17:20:40.199679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:57.364 [2024-10-30 17:20:40.199692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:16:57.364 [2024-10-30 17:20:40.199698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-10-30 17:20:40.199916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-10-30 17:20:40.199923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:57.364 [2024-10-30 17:20:40.199931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:16:57.364 [2024-10-30 17:20:40.199937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-10-30 17:20:40.203171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-10-30 17:20:40.203195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:57.364 [2024-10-30 17:20:40.203215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:16:57.364 [2024-10-30 17:20:40.203223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-10-30 17:20:40.208448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-10-30 17:20:40.208554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:57.364 [2024-10-30 17:20:40.208571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.195 ms 00:16:57.364 [2024-10-30 17:20:40.208577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-10-30 17:20:40.215972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-10-30 17:20:40.216068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:57.364 [2024-10-30 17:20:40.216083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.351 ms 00:16:57.364 [2024-10-30 17:20:40.216094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.364 [2024-10-30 17:20:40.222732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.364 [2024-10-30 17:20:40.222808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:57.365 [2024-10-30 17:20:40.222857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.609 ms 00:16:57.365 [2024-10-30 17:20:40.222876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.365 [2024-10-30 17:20:40.222989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.365 [2024-10-30 17:20:40.223035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:57.365 [2024-10-30 17:20:40.223083] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:57.365 [2024-10-30 17:20:40.223098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.365 [2024-10-30 17:20:40.230715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.365 [2024-10-30 17:20:40.230803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:57.365 [2024-10-30 17:20:40.230862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.592 ms 00:16:57.365 [2024-10-30 17:20:40.230879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.365 [2024-10-30 17:20:40.238225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.365 [2024-10-30 17:20:40.238307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:57.365 [2024-10-30 17:20:40.238349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.309 ms 00:16:57.365 [2024-10-30 17:20:40.238366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.365 [2024-10-30 17:20:40.245724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.365 [2024-10-30 17:20:40.245970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:57.365 [2024-10-30 17:20:40.246033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.089 ms 00:16:57.365 [2024-10-30 17:20:40.246053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.365 [2024-10-30 17:20:40.253177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.365 [2024-10-30 17:20:40.253275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:57.365 [2024-10-30 17:20:40.253320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.024 ms 00:16:57.365 [2024-10-30 17:20:40.253337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.365 [2024-10-30 17:20:40.253371] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:57.365 [2024-10-30 17:20:40.253417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253745] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.253998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 
[2024-10-30 17:20:40.254485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.254979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:16:57.365 [2024-10-30 17:20:40.255181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:57.365 [2024-10-30 17:20:40.255740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.255761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.255804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.255826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.255849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.255942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.255971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.255992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:57.366 [2024-10-30 17:20:40.256629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:57.366 [2024-10-30 17:20:40.256638] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c105cd40-d948-4956-a986-47ea9020475a 00:16:57.366 [2024-10-30 17:20:40.256650] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:57.366 [2024-10-30 17:20:40.256659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:57.366 [2024-10-30 17:20:40.256666] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:57.366 [2024-10-30 17:20:40.256673] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:57.366 [2024-10-30 17:20:40.256678] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:57.366 [2024-10-30 17:20:40.256685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:57.366 [2024-10-30 17:20:40.256691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:57.366 [2024-10-30 17:20:40.256697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:57.366 [2024-10-30 17:20:40.256702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:57.366 [2024-10-30 17:20:40.256709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:57.366 [2024-10-30 17:20:40.256715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:57.366 [2024-10-30 17:20:40.256722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:16:57.366 [2024-10-30 17:20:40.256728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.366 [2024-10-30 17:20:40.266338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.366 [2024-10-30 17:20:40.266421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:57.366 [2024-10-30 17:20:40.266436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.572 ms 00:16:57.366 [2024-10-30 17:20:40.266442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.366 [2024-10-30 17:20:40.266718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.366 [2024-10-30 17:20:40.266726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:57.366 [2024-10-30 17:20:40.266734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:16:57.366 [2024-10-30 17:20:40.266739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.366 [2024-10-30 17:20:40.301254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.366 [2024-10-30 17:20:40.301351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.366 [2024-10-30 17:20:40.301365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.366 [2024-10-30 17:20:40.301371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.366 [2024-10-30 17:20:40.301448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.366 [2024-10-30 17:20:40.301455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.366 [2024-10-30 17:20:40.301462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.366 [2024-10-30 17:20:40.301468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.366 [2024-10-30 17:20:40.301503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.366 [2024-10-30 17:20:40.301510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.366 [2024-10-30 17:20:40.301518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.366 [2024-10-30 17:20:40.301524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.366 [2024-10-30 17:20:40.301538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.366 [2024-10-30 17:20:40.301544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.366 [2024-10-30 17:20:40.301551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.366 [2024-10-30 17:20:40.301557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.627 [2024-10-30 17:20:40.360476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.627 [2024-10-30 17:20:40.360599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.627 [2024-10-30 17:20:40.360614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.627 [2024-10-30 17:20:40.360621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.627 [2024-10-30 
17:20:40.409258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.627 [2024-10-30 17:20:40.409291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.627 [2024-10-30 17:20:40.409301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.627 [2024-10-30 17:20:40.409307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.627 [2024-10-30 17:20:40.409366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.628 [2024-10-30 17:20:40.409376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.628 [2024-10-30 17:20:40.409385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.628 [2024-10-30 17:20:40.409391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-10-30 17:20:40.409414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.628 [2024-10-30 17:20:40.409421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.628 [2024-10-30 17:20:40.409428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.628 [2024-10-30 17:20:40.409433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-10-30 17:20:40.409504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.628 [2024-10-30 17:20:40.409511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.628 [2024-10-30 17:20:40.409519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.628 [2024-10-30 17:20:40.409525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-10-30 17:20:40.409551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.628 [2024-10-30 17:20:40.409557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:57.628 [2024-10-30 17:20:40.409565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.628 [2024-10-30 17:20:40.409571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-10-30 17:20:40.409601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.628 [2024-10-30 17:20:40.409608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:57.628 [2024-10-30 17:20:40.409618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.628 [2024-10-30 17:20:40.409623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-10-30 17:20:40.409658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.628 [2024-10-30 17:20:40.409666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:57.628 [2024-10-30 17:20:40.409673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.628 [2024-10-30 17:20:40.409679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-10-30 17:20:40.409780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 212.284 ms, result 0 00:16:58.201 17:20:40 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:58.201 [2024-10-30 17:20:40.980563] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:16:58.201 [2024-10-30 17:20:40.980692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74099 ] 00:16:58.201 [2024-10-30 17:20:41.139271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.462 [2024-10-30 17:20:41.217411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.462 [2024-10-30 17:20:41.421567] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.462 [2024-10-30 17:20:41.421618] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.724 [2024-10-30 17:20:41.573627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.724 [2024-10-30 17:20:41.573664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.725 [2024-10-30 17:20:41.573675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:58.725 [2024-10-30 17:20:41.573681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.575735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.575765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.725 [2024-10-30 17:20:41.575773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.043 ms 00:16:58.725 [2024-10-30 17:20:41.575779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.575833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.725 [2024-10-30 17:20:41.576351] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.725 [2024-10-30 17:20:41.576368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.576375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.725 [2024-10-30 17:20:41.576382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:16:58.725 [2024-10-30 17:20:41.576388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.577356] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.725 [2024-10-30 17:20:41.586914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.586941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.725 [2024-10-30 17:20:41.586952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.559 ms 00:16:58.725 [2024-10-30 17:20:41.586958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.587024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.587033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:58.725 [2024-10-30 17:20:41.587039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:58.725 [2024-10-30 
17:20:41.587044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.591330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.591355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.725 [2024-10-30 17:20:41.591363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.257 ms 00:16:58.725 [2024-10-30 17:20:41.591368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.591437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.591444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.725 [2024-10-30 17:20:41.591450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:58.725 [2024-10-30 17:20:41.591456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.591472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.591479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.725 [2024-10-30 17:20:41.591486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.725 [2024-10-30 17:20:41.591492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.591509] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:58.725 [2024-10-30 17:20:41.594146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.594174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.725 [2024-10-30 17:20:41.594181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:16:58.725 [2024-10-30 17:20:41.594189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.594238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.594245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.725 [2024-10-30 17:20:41.594251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:58.725 [2024-10-30 17:20:41.594257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.594270] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.725 [2024-10-30 17:20:41.594284] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:58.725 [2024-10-30 17:20:41.594311] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.725 [2024-10-30 17:20:41.594322] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:58.725 [2024-10-30 17:20:41.594400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:58.725 [2024-10-30 17:20:41.594408] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.725 [2024-10-30 17:20:41.594416] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:16:58.725 [2024-10-30 17:20:41.594423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594429] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594437] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:58.725 [2024-10-30 17:20:41.594443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.725 [2024-10-30 17:20:41.594449] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:58.725 [2024-10-30 17:20:41.594454] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:58.725 [2024-10-30 17:20:41.594460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.594465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.725 [2024-10-30 17:20:41.594471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:16:58.725 [2024-10-30 17:20:41.594476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.594543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.725 [2024-10-30 17:20:41.594549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.725 [2024-10-30 17:20:41.594555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:58.725 [2024-10-30 17:20:41.594562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.725 [2024-10-30 17:20:41.594634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.725 [2024-10-30 17:20:41.594641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.725 [2024-10-30 17:20:41.594647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.725 [2024-10-30 17:20:41.594663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.725 [2024-10-30 17:20:41.594679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.725 [2024-10-30 17:20:41.594689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.725 [2024-10-30 17:20:41.594694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:58.725 [2024-10-30 17:20:41.594699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.725 [2024-10-30 17:20:41.594709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.725 [2024-10-30 17:20:41.594714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:58.725 [2024-10-30 17:20:41.594719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:16:58.725 [2024-10-30 17:20:41.594731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.725 [2024-10-30 17:20:41.594746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.725 [2024-10-30 17:20:41.594761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.725 [2024-10-30 17:20:41.594776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.725 [2024-10-30 17:20:41.594792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.725 [2024-10-30 17:20:41.594803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.725 [2024-10-30 17:20:41.594808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.725 [2024-10-30 17:20:41.594817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.725 [2024-10-30 17:20:41.594822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:58.725 [2024-10-30 17:20:41.594827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.725 [2024-10-30 17:20:41.594832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:58.725 [2024-10-30 17:20:41.594838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:58.725 [2024-10-30 17:20:41.594843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:58.725 [2024-10-30 17:20:41.594853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:58.725 [2024-10-30 17:20:41.594858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.725 [2024-10-30 17:20:41.594862] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.725 [2024-10-30 17:20:41.594868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.725 [2024-10-30 17:20:41.594874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.726 [2024-10-30 17:20:41.594879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.726 [2024-10-30 17:20:41.594887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.726 [2024-10-30 17:20:41.594893] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.726 [2024-10-30 17:20:41.594898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.726 [2024-10-30 17:20:41.594903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.726 [2024-10-30 17:20:41.594908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.726 [2024-10-30 17:20:41.594913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.726 [2024-10-30 17:20:41.594919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.726 [2024-10-30 17:20:41.594926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.726 [2024-10-30 17:20:41.594932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:58.726 [2024-10-30 17:20:41.594938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:58.726 [2024-10-30 17:20:41.594944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:58.726 [2024-10-30 17:20:41.594949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:58.726 [2024-10-30 17:20:41.594954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:58.726 [2024-10-30 17:20:41.594959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:58.726 [2024-10-30 17:20:41.594965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:58.726 [2024-10-30 17:20:41.594970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:58.726 [2024-10-30 17:20:41.594975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:58.726 [2024-10-30 17:20:41.594980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:58.726 [2024-10-30 17:20:41.594986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:58.726 [2024-10-30 17:20:41.594991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:58.726 [2024-10-30 17:20:41.594996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:58.726 [2024-10-30 17:20:41.595002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:58.726 [2024-10-30 17:20:41.595007] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.726 [2024-10-30 17:20:41.595013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.726 [2024-10-30 17:20:41.595019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.726 [2024-10-30 17:20:41.595024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.726 [2024-10-30 17:20:41.595029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.726 [2024-10-30 17:20:41.595035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.726 [2024-10-30 17:20:41.595040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.595046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.726 [2024-10-30 17:20:41.595051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:16:58.726 [2024-10-30 17:20:41.595059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.615617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.615644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.726 [2024-10-30 17:20:41.615651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.522 ms 00:16:58.726 [2024-10-30 17:20:41.615657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.615748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.615756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.726 [2024-10-30 17:20:41.615765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:58.726 [2024-10-30 17:20:41.615771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.653585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.653615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.726 [2024-10-30 17:20:41.653625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.798 ms 00:16:58.726 [2024-10-30 17:20:41.653631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.653690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.653698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.726 [2024-10-30 17:20:41.653705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:58.726 [2024-10-30 17:20:41.653710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.654012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.654029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.726 [2024-10-30 17:20:41.654036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:16:58.726 [2024-10-30 17:20:41.654042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.654141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.654151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.726 [2024-10-30 17:20:41.654157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:58.726 [2024-10-30 17:20:41.654163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.664793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.664818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.726 [2024-10-30 17:20:41.664826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.616 ms 00:16:58.726 [2024-10-30 17:20:41.664832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.674609] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:58.726 [2024-10-30 17:20:41.674726] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.726 [2024-10-30 17:20:41.674738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.674745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.726 [2024-10-30 17:20:41.674751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.833 ms 00:16:58.726 [2024-10-30 17:20:41.674757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.693159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.693192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.726 [2024-10-30 17:20:41.693211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.360 ms 00:16:58.726 [2024-10-30 17:20:41.693218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.726 [2024-10-30 17:20:41.702301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.726 [2024-10-30 17:20:41.702334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:58.726 [2024-10-30 17:20:41.702342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.012 ms 00:16:58.726 [2024-10-30 17:20:41.702347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.990 [2024-10-30 17:20:41.710889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.710912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.991 [2024-10-30 17:20:41.710919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.503 ms 00:16:58.991 [2024-10-30 17:20:41.710924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.711388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.711406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.991 [2024-10-30 17:20:41.711414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:16:58.991 [2024-10-30 17:20:41.711419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.755709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.755742] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:58.991 [2024-10-30 17:20:41.755751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.273 ms 00:16:58.991 [2024-10-30 17:20:41.755757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.763633] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:58.991 [2024-10-30 17:20:41.774928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.774955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:58.991 [2024-10-30 17:20:41.774964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.112 ms 00:16:58.991 [2024-10-30 17:20:41.774971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.775039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.775048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:58.991 [2024-10-30 17:20:41.775055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:58.991 [2024-10-30 17:20:41.775061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.775096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.775102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.991 [2024-10-30 17:20:41.775109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:58.991 [2024-10-30 17:20:41.775114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.775135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.775141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.991 [2024-10-30 17:20:41.775149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:58.991 [2024-10-30 17:20:41.775155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.775177] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:58.991 [2024-10-30 17:20:41.775185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.775191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:58.991 [2024-10-30 17:20:41.775197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:58.991 [2024-10-30 17:20:41.775223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.792920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.793033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:58.991 [2024-10-30 17:20:41.793046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.679 ms 00:16:58.991 [2024-10-30 17:20:41.793052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.793120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.991 [2024-10-30 17:20:41.793128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:58.991 [2024-10-30 17:20:41.793135] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:58.991 [2024-10-30 17:20:41.793140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.991 [2024-10-30 17:20:41.793752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:58.991 [2024-10-30 17:20:41.795990] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.904 ms, result 0 00:16:58.991 [2024-10-30 17:20:41.796651] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:58.991 [2024-10-30 17:20:41.811429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.933  [2024-10-30T17:20:44.301Z] Copying: 22/256 [MB] (22 MBps) [2024-10-30T17:20:44.873Z] Copying: 39/256 [MB] (16 MBps) [2024-10-30T17:20:46.261Z] Copying: 55/256 [MB] (16 MBps) [2024-10-30T17:20:47.237Z] Copying: 74/256 [MB] (19 MBps) [2024-10-30T17:20:48.211Z] Copying: 93/256 [MB] (19 MBps) [2024-10-30T17:20:49.154Z] Copying: 110/256 [MB] (16 MBps) [2024-10-30T17:20:50.097Z] Copying: 129/256 [MB] (19 MBps) [2024-10-30T17:20:51.040Z] Copying: 149/256 [MB] (19 MBps) [2024-10-30T17:20:51.985Z] Copying: 172/256 [MB] (22 MBps) [2024-10-30T17:20:52.930Z] Copying: 193/256 [MB] (20 MBps) [2024-10-30T17:20:53.873Z] Copying: 214/256 [MB] (21 MBps) [2024-10-30T17:20:55.260Z] Copying: 234/256 [MB] (19 MBps) [2024-10-30T17:20:55.260Z] Copying: 253/256 [MB] (19 MBps) [2024-10-30T17:20:55.260Z] Copying: 256/256 [MB] (average 19 MBps)[2024-10-30 17:20:55.038298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:12.279 [2024-10-30 17:20:55.051251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.279 [2024-10-30 17:20:55.051307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:12.279 [2024-10-30 17:20:55.051326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:12.279 [2024-10-30 17:20:55.051336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.279 [2024-10-30 17:20:55.051365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:12.279 [2024-10-30 17:20:55.054924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.279 [2024-10-30 17:20:55.054974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:12.279 [2024-10-30 17:20:55.054986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.540 ms 00:17:12.279 [2024-10-30 17:20:55.054994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.279 [2024-10-30 17:20:55.055282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.279 [2024-10-30 17:20:55.055293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:12.279 [2024-10-30 17:20:55.055303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:12.279 [2024-10-30 17:20:55.055311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.279 [2024-10-30 17:20:55.059008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.279 [2024-10-30 17:20:55.059034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:12.279 [2024-10-30 17:20:55.059048] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:17:12.279 [2024-10-30 17:20:55.059057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.279 [2024-10-30 17:20:55.066028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.279 [2024-10-30 17:20:55.066072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:12.279 [2024-10-30 17:20:55.066083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.951 ms 00:17:12.279 [2024-10-30 17:20:55.066093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.279 [2024-10-30 17:20:55.091957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.279 [2024-10-30 17:20:55.092006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:12.279 [2024-10-30 17:20:55.092020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.794 ms 00:17:12.280 [2024-10-30 17:20:55.092028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.280 [2024-10-30 17:20:55.109515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.280 [2024-10-30 17:20:55.109564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:12.280 [2024-10-30 17:20:55.109585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.436 ms 00:17:12.280 [2024-10-30 17:20:55.109593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.280 [2024-10-30 17:20:55.109753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.280 [2024-10-30 17:20:55.109764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:12.280 [2024-10-30 17:20:55.109774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:12.280 [2024-10-30 17:20:55.109781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.280 [2024-10-30 17:20:55.135849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.280 [2024-10-30 17:20:55.136054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:12.280 [2024-10-30 17:20:55.136076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.041 ms 00:17:12.280 [2024-10-30 17:20:55.136083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.280 [2024-10-30 17:20:55.162318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.280 [2024-10-30 17:20:55.162506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:12.280 [2024-10-30 17:20:55.162527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:17:12.280 [2024-10-30 17:20:55.162535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.280 [2024-10-30 17:20:55.187662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.280 [2024-10-30 17:20:55.187712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:12.280 [2024-10-30 17:20:55.187724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.891 ms 00:17:12.280 [2024-10-30 17:20:55.187732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.280 [2024-10-30 17:20:55.212502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.280 [2024-10-30 17:20:55.212548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:17:12.280 [2024-10-30 17:20:55.212560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.689 ms 00:17:12.280 [2024-10-30 17:20:55.212567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.280 [2024-10-30 17:20:55.212616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:12.280 [2024-10-30 17:20:55.212637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 
17:20:55.212806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.212988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:12.280 [2024-10-30 17:20:55.212996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:12.280 [2024-10-30 17:20:55.213178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:12.281 [2024-10-30 17:20:55.213469] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:12.281 [2024-10-30 17:20:55.213478] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c105cd40-d948-4956-a986-47ea9020475a 00:17:12.281 [2024-10-30 17:20:55.213486] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:12.281 [2024-10-30 17:20:55.213494] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:12.281 [2024-10-30 17:20:55.213501] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:12.281 [2024-10-30 17:20:55.213509] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:12.281 [2024-10-30 17:20:55.213516] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:12.281 [2024-10-30 17:20:55.213525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:12.281 [2024-10-30 17:20:55.213531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:12.281 [2024-10-30 17:20:55.213538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:12.281 [2024-10-30 17:20:55.213544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:12.281 [2024-10-30 17:20:55.213552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.281 [2024-10-30 17:20:55.213560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:12.281 [2024-10-30 17:20:55.213569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:17:12.281 [2024-10-30 17:20:55.213580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.281 [2024-10-30 17:20:55.227222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.281 [2024-10-30 17:20:55.227262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:12.281 [2024-10-30 17:20:55.227273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.605 ms 00:17:12.281 [2024-10-30 17:20:55.227281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.281 [2024-10-30 17:20:55.227679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.281 [2024-10-30 17:20:55.227703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:12.281 [2024-10-30 17:20:55.227712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:17:12.281 [2024-10-30 17:20:55.227720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.266898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.266949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.542 [2024-10-30 17:20:55.266962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.266971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 
[2024-10-30 17:20:55.267080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.267094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.542 [2024-10-30 17:20:55.267103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.267112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.267172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.267182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.542 [2024-10-30 17:20:55.267191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.267225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.267244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.267254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.542 [2024-10-30 17:20:55.267267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.267274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.351916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.351977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.542 [2024-10-30 17:20:55.351993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.352001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.420706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.420762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.542 [2024-10-30 17:20:55.420782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.420791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.420873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.420882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.542 [2024-10-30 17:20:55.420891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.420900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.420933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.420942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.542 [2024-10-30 17:20:55.420950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.420959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.421062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.421074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.542 [2024-10-30 17:20:55.421083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.421091] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.421127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.421137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.542 [2024-10-30 17:20:55.421146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.421155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.421234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.421246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.542 [2024-10-30 17:20:55.421255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.421263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.421313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.542 [2024-10-30 17:20:55.421324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.542 [2024-10-30 17:20:55.421333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.542 [2024-10-30 17:20:55.421341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.542 [2024-10-30 17:20:55.421501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.255 ms, result 0 00:17:13.483 00:17:13.483 00:17:13.483 17:20:56 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:14.054 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:14.054 17:20:56 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:14.054 17:20:56 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:14.054 17:20:56 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:14.054 17:20:56 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:14.054 17:20:56 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:14.054 17:20:56 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:14.054 Process with pid 74052 is not found 00:17:14.054 17:20:56 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74052 00:17:14.054 17:20:56 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74052 ']' 00:17:14.054 17:20:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74052 00:17:14.054 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74052) - No such process 00:17:14.054 17:20:56 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74052 is not found' 00:17:14.054 ************************************ 00:17:14.054 END TEST ftl_trim 00:17:14.054 ************************************ 00:17:14.054 00:17:14.054 real 1m10.586s 00:17:14.054 user 1m34.207s 00:17:14.054 sys 0m5.483s 00:17:14.054 17:20:56 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:14.054 17:20:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:14.054 17:20:56 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:14.054 17:20:56 ftl -- common/autotest_common.sh@1103 -- # '[' 
5 -le 1 ']' 00:17:14.054 17:20:56 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:14.054 17:20:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:14.054 ************************************ 00:17:14.054 START TEST ftl_restore 00:17:14.054 ************************************ 00:17:14.054 17:20:56 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:14.054 * Looking for test storage... 00:17:14.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.054 17:20:56 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:14.054 17:20:56 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:17:14.054 17:20:56 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:14.054 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.054 17:20:57 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:14.054 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.054 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:14.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.054 --rc genhtml_branch_coverage=1 00:17:14.054 --rc genhtml_function_coverage=1 00:17:14.054 --rc genhtml_legend=1 00:17:14.054 --rc geninfo_all_blocks=1 00:17:14.054 --rc geninfo_unexecuted_blocks=1 00:17:14.054 00:17:14.054 ' 00:17:14.054 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:14.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.054 --rc genhtml_branch_coverage=1 00:17:14.054 --rc genhtml_function_coverage=1 00:17:14.054 --rc genhtml_legend=1 00:17:14.054 --rc geninfo_all_blocks=1 00:17:14.054 --rc geninfo_unexecuted_blocks=1 00:17:14.054 00:17:14.054 ' 00:17:14.054 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:14.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.054 --rc genhtml_branch_coverage=1 00:17:14.054 --rc genhtml_function_coverage=1 00:17:14.054 --rc genhtml_legend=1 00:17:14.054 --rc geninfo_all_blocks=1 00:17:14.054 --rc geninfo_unexecuted_blocks=1 00:17:14.054 00:17:14.054 ' 00:17:14.054 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:14.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.054 --rc genhtml_branch_coverage=1 00:17:14.054 --rc genhtml_function_coverage=1 00:17:14.054 --rc genhtml_legend=1 00:17:14.054 --rc geninfo_all_blocks=1 00:17:14.054 --rc geninfo_unexecuted_blocks=1 00:17:14.054 00:17:14.054 ' 00:17:14.054 17:20:57 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:14.054 17:20:57 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:14.054 17:20:57 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.054 17:20:57 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.054 17:20:57 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.QcuLumOmSc 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:14.315 
17:20:57 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74336 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74336 00:17:14.315 17:20:57 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.315 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74336 ']' 00:17:14.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.315 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.315 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:14.315 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.315 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:14.315 17:20:57 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:14.315 [2024-10-30 17:20:57.133528] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:17:14.315 [2024-10-30 17:20:57.133928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74336 ] 00:17:14.576 [2024-10-30 17:20:57.298186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.576 [2024-10-30 17:20:57.419809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.149 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:15.149 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:17:15.149 17:20:58 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:15.149 17:20:58 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:15.149 17:20:58 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:15.149 17:20:58 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:15.149 17:20:58 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:15.149 17:20:58 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:15.721 17:20:58 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:15.721 17:20:58 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:15.721 17:20:58 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:15.721 { 00:17:15.721 "name": "nvme0n1", 00:17:15.721 "aliases": [ 00:17:15.721 "a10102e0-6207-4f4e-bc66-634c34470b29" 00:17:15.721 ], 00:17:15.721 "product_name": "NVMe disk", 00:17:15.721 "block_size": 4096, 00:17:15.721 "num_blocks": 1310720, 00:17:15.721 "uuid": 
"a10102e0-6207-4f4e-bc66-634c34470b29", 00:17:15.721 "numa_id": -1, 00:17:15.721 "assigned_rate_limits": { 00:17:15.721 "rw_ios_per_sec": 0, 00:17:15.721 "rw_mbytes_per_sec": 0, 00:17:15.721 "r_mbytes_per_sec": 0, 00:17:15.721 "w_mbytes_per_sec": 0 00:17:15.721 }, 00:17:15.721 "claimed": true, 00:17:15.721 "claim_type": "read_many_write_one", 00:17:15.721 "zoned": false, 00:17:15.721 "supported_io_types": { 00:17:15.721 "read": true, 00:17:15.721 "write": true, 00:17:15.721 "unmap": true, 00:17:15.721 "flush": true, 00:17:15.721 "reset": true, 00:17:15.721 "nvme_admin": true, 00:17:15.721 "nvme_io": true, 00:17:15.721 "nvme_io_md": false, 00:17:15.721 "write_zeroes": true, 00:17:15.721 "zcopy": false, 00:17:15.721 "get_zone_info": false, 00:17:15.721 "zone_management": false, 00:17:15.721 "zone_append": false, 00:17:15.721 "compare": true, 00:17:15.721 "compare_and_write": false, 00:17:15.721 "abort": true, 00:17:15.721 "seek_hole": false, 00:17:15.721 "seek_data": false, 00:17:15.721 "copy": true, 00:17:15.721 "nvme_iov_md": false 00:17:15.721 }, 00:17:15.721 "driver_specific": { 00:17:15.721 "nvme": [ 00:17:15.721 { 00:17:15.721 "pci_address": "0000:00:11.0", 00:17:15.721 "trid": { 00:17:15.721 "trtype": "PCIe", 00:17:15.721 "traddr": "0000:00:11.0" 00:17:15.721 }, 00:17:15.721 "ctrlr_data": { 00:17:15.721 "cntlid": 0, 00:17:15.721 "vendor_id": "0x1b36", 00:17:15.721 "model_number": "QEMU NVMe Ctrl", 00:17:15.721 "serial_number": "12341", 00:17:15.721 "firmware_revision": "8.0.0", 00:17:15.721 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:15.721 "oacs": { 00:17:15.721 "security": 0, 00:17:15.721 "format": 1, 00:17:15.721 "firmware": 0, 00:17:15.721 "ns_manage": 1 00:17:15.721 }, 00:17:15.721 "multi_ctrlr": false, 00:17:15.721 "ana_reporting": false 00:17:15.721 }, 00:17:15.721 "vs": { 00:17:15.721 "nvme_version": "1.4" 00:17:15.721 }, 00:17:15.721 "ns_data": { 00:17:15.721 "id": 1, 00:17:15.721 "can_share": false 00:17:15.721 } 00:17:15.721 } 00:17:15.721 ], 00:17:15.721 "mp_policy": "active_passive" 00:17:15.721 } 00:17:15.721 } 00:17:15.721 ]' 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:15.721 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:15.722 17:20:58 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:17:15.722 17:20:58 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:15.722 17:20:58 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:15.722 17:20:58 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:15.722 17:20:58 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:15.722 17:20:58 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:15.983 17:20:58 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=1104e65f-3b09-4d84-93bf-2c8e3d81e413 00:17:15.983 17:20:58 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:15.983 17:20:58 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1104e65f-3b09-4d84-93bf-2c8e3d81e413 00:17:16.244 17:20:59 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:16.505 17:20:59 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=33f7c999-42ea-4296-87bd-66a0f6794f66 00:17:16.506 17:20:59 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 33f7c999-42ea-4296-87bd-66a0f6794f66 00:17:16.767 17:20:59 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=d961c73b-df36-42f5-8998-769d942de6b8 00:17:16.767 17:20:59 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:16.767 17:20:59 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d961c73b-df36-42f5-8998-769d942de6b8 00:17:16.767 17:20:59 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:16.767 17:20:59 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:16.767 17:20:59 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=d961c73b-df36-42f5-8998-769d942de6b8 00:17:16.767 17:20:59 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:16.767 17:20:59 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size d961c73b-df36-42f5-8998-769d942de6b8 00:17:16.767 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=d961c73b-df36-42f5-8998-769d942de6b8 00:17:16.767 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:16.767 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:16.767 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:16.767 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d961c73b-df36-42f5-8998-769d942de6b8 00:17:17.028 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:17.028 { 00:17:17.028 "name": "d961c73b-df36-42f5-8998-769d942de6b8", 00:17:17.028 "aliases": [ 00:17:17.028 "lvs/nvme0n1p0" 00:17:17.028 ], 00:17:17.028 "product_name": "Logical Volume", 00:17:17.028 "block_size": 4096, 00:17:17.028 "num_blocks": 26476544, 00:17:17.028 "uuid": "d961c73b-df36-42f5-8998-769d942de6b8", 00:17:17.028 "assigned_rate_limits": { 00:17:17.028 "rw_ios_per_sec": 0, 00:17:17.028 "rw_mbytes_per_sec": 0, 00:17:17.028 "r_mbytes_per_sec": 0, 00:17:17.028 "w_mbytes_per_sec": 0 00:17:17.028 }, 00:17:17.028 "claimed": false, 00:17:17.028 "zoned": false, 00:17:17.028 "supported_io_types": { 00:17:17.028 "read": true, 00:17:17.028 "write": true, 00:17:17.028 "unmap": true, 00:17:17.028 "flush": false, 00:17:17.028 "reset": true, 00:17:17.028 "nvme_admin": false, 00:17:17.028 "nvme_io": false, 00:17:17.028 "nvme_io_md": false, 00:17:17.028 "write_zeroes": true, 00:17:17.028 "zcopy": false, 00:17:17.028 "get_zone_info": false, 00:17:17.028 "zone_management": false, 00:17:17.028 "zone_append": false, 00:17:17.028 "compare": false, 00:17:17.028 "compare_and_write": false, 00:17:17.028 "abort": false, 00:17:17.028 "seek_hole": true, 00:17:17.028 "seek_data": true, 00:17:17.028 "copy": false, 00:17:17.028 "nvme_iov_md": false 00:17:17.028 }, 00:17:17.028 "driver_specific": { 00:17:17.028 "lvol": { 00:17:17.028 "lvol_store_uuid": "33f7c999-42ea-4296-87bd-66a0f6794f66", 00:17:17.028 "base_bdev": "nvme0n1", 00:17:17.028 "thin_provision": true, 00:17:17.028 "num_allocated_clusters": 0, 00:17:17.028 "snapshot": false, 00:17:17.028 "clone": false, 00:17:17.028 "esnap_clone": false 00:17:17.028 } 00:17:17.028 } 00:17:17.028 } 00:17:17.028 ]' 00:17:17.028 17:20:59 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:17.028 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:17.028 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:17.028 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:17.028 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:17.028 17:20:59 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:17.028 17:20:59 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:17.028 17:20:59 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:17.028 17:20:59 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:17.288 17:21:00 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:17.288 17:21:00 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:17.288 17:21:00 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size d961c73b-df36-42f5-8998-769d942de6b8 00:17:17.288 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=d961c73b-df36-42f5-8998-769d942de6b8 00:17:17.288 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:17.288 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:17.288 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:17.288 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d961c73b-df36-42f5-8998-769d942de6b8 00:17:17.548 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:17.548 { 00:17:17.548 "name": "d961c73b-df36-42f5-8998-769d942de6b8", 00:17:17.548 "aliases": [ 00:17:17.548 "lvs/nvme0n1p0" 00:17:17.548 ], 00:17:17.548 "product_name": "Logical Volume", 00:17:17.548 "block_size": 4096, 00:17:17.548 "num_blocks": 26476544, 00:17:17.548 "uuid": "d961c73b-df36-42f5-8998-769d942de6b8", 00:17:17.548 "assigned_rate_limits": { 00:17:17.548 "rw_ios_per_sec": 0, 00:17:17.548 "rw_mbytes_per_sec": 0, 00:17:17.548 "r_mbytes_per_sec": 0, 00:17:17.548 "w_mbytes_per_sec": 0 00:17:17.548 }, 00:17:17.548 "claimed": false, 00:17:17.548 "zoned": false, 00:17:17.548 "supported_io_types": { 00:17:17.548 "read": true, 00:17:17.548 "write": true, 00:17:17.548 "unmap": true, 00:17:17.548 "flush": false, 00:17:17.548 "reset": true, 00:17:17.548 "nvme_admin": false, 00:17:17.548 "nvme_io": false, 00:17:17.548 "nvme_io_md": false, 00:17:17.548 "write_zeroes": true, 00:17:17.548 "zcopy": false, 00:17:17.548 "get_zone_info": false, 00:17:17.548 "zone_management": false, 00:17:17.548 "zone_append": false, 00:17:17.548 "compare": false, 00:17:17.548 "compare_and_write": false, 00:17:17.548 "abort": false, 00:17:17.548 "seek_hole": true, 00:17:17.548 "seek_data": true, 00:17:17.548 "copy": false, 00:17:17.548 "nvme_iov_md": false 00:17:17.548 }, 00:17:17.548 "driver_specific": { 00:17:17.548 "lvol": { 00:17:17.548 "lvol_store_uuid": "33f7c999-42ea-4296-87bd-66a0f6794f66", 00:17:17.548 "base_bdev": "nvme0n1", 00:17:17.548 "thin_provision": true, 00:17:17.548 "num_allocated_clusters": 0, 00:17:17.548 "snapshot": false, 00:17:17.548 "clone": false, 00:17:17.548 "esnap_clone": false 00:17:17.548 } 00:17:17.548 } 00:17:17.548 } 00:17:17.548 ]' 00:17:17.548 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
00:17:17.548 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:17.548 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:17.548 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:17.548 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:17.548 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:17.548 17:21:00 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:17.548 17:21:00 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:17.807 17:21:00 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:17.807 17:21:00 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size d961c73b-df36-42f5-8998-769d942de6b8 00:17:17.808 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=d961c73b-df36-42f5-8998-769d942de6b8 00:17:17.808 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:17.808 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:17.808 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:17.808 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d961c73b-df36-42f5-8998-769d942de6b8 00:17:18.067 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:18.067 { 00:17:18.067 "name": "d961c73b-df36-42f5-8998-769d942de6b8", 00:17:18.067 "aliases": [ 00:17:18.067 "lvs/nvme0n1p0" 00:17:18.067 ], 00:17:18.067 "product_name": "Logical Volume", 00:17:18.067 "block_size": 4096, 00:17:18.067 "num_blocks": 26476544, 00:17:18.067 "uuid": "d961c73b-df36-42f5-8998-769d942de6b8", 00:17:18.067 "assigned_rate_limits": { 00:17:18.067 "rw_ios_per_sec": 0, 00:17:18.067 "rw_mbytes_per_sec": 0, 00:17:18.067 "r_mbytes_per_sec": 0, 00:17:18.067 "w_mbytes_per_sec": 0 00:17:18.067 }, 00:17:18.067 "claimed": false, 00:17:18.067 "zoned": false, 00:17:18.067 "supported_io_types": { 00:17:18.067 "read": true, 00:17:18.067 "write": true, 00:17:18.067 "unmap": true, 00:17:18.067 "flush": false, 00:17:18.067 "reset": true, 00:17:18.067 "nvme_admin": false, 00:17:18.067 "nvme_io": false, 00:17:18.067 "nvme_io_md": false, 00:17:18.067 "write_zeroes": true, 00:17:18.067 "zcopy": false, 00:17:18.067 "get_zone_info": false, 00:17:18.067 "zone_management": false, 00:17:18.067 "zone_append": false, 00:17:18.067 "compare": false, 00:17:18.067 "compare_and_write": false, 00:17:18.067 "abort": false, 00:17:18.067 "seek_hole": true, 00:17:18.067 "seek_data": true, 00:17:18.067 "copy": false, 00:17:18.067 "nvme_iov_md": false 00:17:18.067 }, 00:17:18.067 "driver_specific": { 00:17:18.067 "lvol": { 00:17:18.067 "lvol_store_uuid": "33f7c999-42ea-4296-87bd-66a0f6794f66", 00:17:18.067 "base_bdev": "nvme0n1", 00:17:18.067 "thin_provision": true, 00:17:18.067 "num_allocated_clusters": 0, 00:17:18.067 "snapshot": false, 00:17:18.067 "clone": false, 00:17:18.067 "esnap_clone": false 00:17:18.067 } 00:17:18.067 } 00:17:18.067 } 00:17:18.067 ]' 00:17:18.067 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:18.067 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:18.067 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:18.067 17:21:00 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:17:18.067 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:18.067 17:21:00 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:18.067 17:21:00 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:18.067 17:21:00 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d961c73b-df36-42f5-8998-769d942de6b8 --l2p_dram_limit 10' 00:17:18.067 17:21:00 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:18.067 17:21:00 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:18.067 17:21:00 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:18.067 17:21:00 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:18.067 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:18.067 17:21:00 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d961c73b-df36-42f5-8998-769d942de6b8 --l2p_dram_limit 10 -c nvc0n1p0 00:17:18.329 [2024-10-30 17:21:01.115953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.115993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:18.329 [2024-10-30 17:21:01.116007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:18.329 [2024-10-30 17:21:01.116013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.116061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.116069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.329 [2024-10-30 17:21:01.116077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:18.329 [2024-10-30 17:21:01.116083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.116102] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:18.329 [2024-10-30 17:21:01.116751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:18.329 [2024-10-30 17:21:01.116778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.116784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.329 [2024-10-30 17:21:01.116794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:17:18.329 [2024-10-30 17:21:01.116800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.116853] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6680e01b-326a-4063-9fcb-aad95718bcea 00:17:18.329 [2024-10-30 17:21:01.117826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.117933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:18.329 [2024-10-30 17:21:01.117946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:18.329 [2024-10-30 17:21:01.117954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.122756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 
17:21:01.122842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.329 [2024-10-30 17:21:01.122883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.766 ms 00:17:18.329 [2024-10-30 17:21:01.122904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.122983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.123271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.329 [2024-10-30 17:21:01.123308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:18.329 [2024-10-30 17:21:01.123381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.123473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.123501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:18.329 [2024-10-30 17:21:01.123517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:18.329 [2024-10-30 17:21:01.123534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.123568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.329 [2024-10-30 17:21:01.126558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.126646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.329 [2024-10-30 17:21:01.126689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.996 ms 00:17:18.329 [2024-10-30 17:21:01.126710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.126746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.126795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:18.329 [2024-10-30 17:21:01.126815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:18.329 [2024-10-30 17:21:01.126830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.126872] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:18.329 [2024-10-30 17:21:01.126993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:18.329 [2024-10-30 17:21:01.127053] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:18.329 [2024-10-30 17:21:01.127079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:18.329 [2024-10-30 17:21:01.127105] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:18.329 [2024-10-30 17:21:01.127128] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:18.329 [2024-10-30 17:21:01.127152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:18.329 [2024-10-30 17:21:01.127212] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:18.329 [2024-10-30 17:21:01.127233] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:18.329 [2024-10-30 17:21:01.127247] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:18.329 [2024-10-30 17:21:01.127266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.127281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:18.329 [2024-10-30 17:21:01.127297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:17:18.329 [2024-10-30 17:21:01.127316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.127394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.329 [2024-10-30 17:21:01.127413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:18.329 [2024-10-30 17:21:01.127430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:18.329 [2024-10-30 17:21:01.127444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.329 [2024-10-30 17:21:01.127540] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:18.329 [2024-10-30 17:21:01.127609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:18.329 [2024-10-30 17:21:01.127627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.329 [2024-10-30 17:21:01.127641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.329 [2024-10-30 17:21:01.127686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:18.329 [2024-10-30 17:21:01.127703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:18.329 [2024-10-30 17:21:01.127720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:18.329 [2024-10-30 17:21:01.127734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:18.329 [2024-10-30 17:21:01.127749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:18.329 [2024-10-30 17:21:01.127762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.329 [2024-10-30 17:21:01.127777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:18.329 [2024-10-30 17:21:01.127846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:18.329 [2024-10-30 17:21:01.127861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.329 [2024-10-30 17:21:01.127875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:18.329 [2024-10-30 17:21:01.127890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:18.329 [2024-10-30 17:21:01.127903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.330 [2024-10-30 17:21:01.127949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:18.330 [2024-10-30 17:21:01.127965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:18.330 [2024-10-30 17:21:01.127981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.330 [2024-10-30 17:21:01.127995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:18.330 [2024-10-30 17:21:01.128040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:18.330 [2024-10-30 17:21:01.128057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.330 [2024-10-30 17:21:01.128072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:18.330 
[2024-10-30 17:21:01.128086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:18.330 [2024-10-30 17:21:01.128120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.330 [2024-10-30 17:21:01.128136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:18.330 [2024-10-30 17:21:01.128151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:18.330 [2024-10-30 17:21:01.128165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.330 [2024-10-30 17:21:01.128215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:18.330 [2024-10-30 17:21:01.128234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:18.330 [2024-10-30 17:21:01.128249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.330 [2024-10-30 17:21:01.128263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:18.330 [2024-10-30 17:21:01.128280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:18.330 [2024-10-30 17:21:01.128318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.330 [2024-10-30 17:21:01.128337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:18.330 [2024-10-30 17:21:01.128351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:18.330 [2024-10-30 17:21:01.128391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.330 [2024-10-30 17:21:01.128407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:18.330 [2024-10-30 17:21:01.128423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:18.330 [2024-10-30 17:21:01.128437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.330 [2024-10-30 17:21:01.128472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:18.330 [2024-10-30 17:21:01.128488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:18.330 [2024-10-30 17:21:01.128504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.330 [2024-10-30 17:21:01.128517] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:18.330 [2024-10-30 17:21:01.128555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:18.330 [2024-10-30 17:21:01.128572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.330 [2024-10-30 17:21:01.128587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.330 [2024-10-30 17:21:01.128602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:18.330 [2024-10-30 17:21:01.128620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:18.330 [2024-10-30 17:21:01.128658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:18.330 [2024-10-30 17:21:01.128677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:18.330 [2024-10-30 17:21:01.128690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:18.330 [2024-10-30 17:21:01.128729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:18.330 [2024-10-30 17:21:01.128749] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:18.330 [2024-10-30 
17:21:01.128775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.330 [2024-10-30 17:21:01.128832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:18.330 [2024-10-30 17:21:01.128877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:18.330 [2024-10-30 17:21:01.128901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:18.330 [2024-10-30 17:21:01.128943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:18.330 [2024-10-30 17:21:01.128966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:18.330 [2024-10-30 17:21:01.128989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:18.330 [2024-10-30 17:21:01.129011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:18.330 [2024-10-30 17:21:01.129059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:18.330 [2024-10-30 17:21:01.129083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:18.330 [2024-10-30 17:21:01.129107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:18.330 [2024-10-30 17:21:01.129129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:18.330 [2024-10-30 17:21:01.129186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:18.330 [2024-10-30 17:21:01.129218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:18.330 [2024-10-30 17:21:01.129241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:18.330 [2024-10-30 17:21:01.129263] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:18.330 [2024-10-30 17:21:01.129314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.330 [2024-10-30 17:21:01.129354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:18.330 [2024-10-30 17:21:01.129378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:18.330 [2024-10-30 17:21:01.129399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:18.330 [2024-10-30 17:21:01.129449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:18.330 [2024-10-30 17:21:01.129474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.330 [2024-10-30 17:21:01.129500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:18.330 [2024-10-30 17:21:01.129515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:17:18.330 [2024-10-30 17:21:01.129531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.330 [2024-10-30 17:21:01.129573] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:18.330 [2024-10-30 17:21:01.129632] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:22.534 [2024-10-30 17:21:04.934292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:04.934518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:22.534 [2024-10-30 17:21:04.934616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3804.702 ms 00:17:22.534 [2024-10-30 17:21:04.934661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 [2024-10-30 17:21:04.962456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:04.962643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.534 [2024-10-30 17:21:04.962766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.478 ms 00:17:22.534 [2024-10-30 17:21:04.962810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 [2024-10-30 17:21:04.962994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:04.963147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.534 [2024-10-30 17:21:04.963272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:22.534 [2024-10-30 17:21:04.963326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 [2024-10-30 17:21:04.996928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:04.996993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.534 [2024-10-30 17:21:04.997009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.427 ms 00:17:22.534 [2024-10-30 17:21:04.997025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 [2024-10-30 17:21:04.997070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:04.997089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.534 [2024-10-30 17:21:04.997104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:22.534 [2024-10-30 17:21:04.997121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 [2024-10-30 17:21:04.997807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:04.997870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.534 [2024-10-30 17:21:04.997886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:17:22.534 [2024-10-30 17:21:04.997899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 
[2024-10-30 17:21:04.998052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:04.998071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.534 [2024-10-30 17:21:04.998084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:22.534 [2024-10-30 17:21:04.998103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 [2024-10-30 17:21:05.015325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:05.015374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.534 [2024-10-30 17:21:05.015390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.191 ms 00:17:22.534 [2024-10-30 17:21:05.015408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 [2024-10-30 17:21:05.028455] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:22.534 [2024-10-30 17:21:05.032387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.534 [2024-10-30 17:21:05.032430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.534 [2024-10-30 17:21:05.032450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.860 ms 00:17:22.534 [2024-10-30 17:21:05.032461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.534 [2024-10-30 17:21:05.150133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.150367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:22.535 [2024-10-30 17:21:05.150406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.624 ms 00:17:22.535 [2024-10-30 17:21:05.150420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.150682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.150708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.535 [2024-10-30 17:21:05.150729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:17:22.535 [2024-10-30 17:21:05.150747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.176615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.176668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:22.535 [2024-10-30 17:21:05.176691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.786 ms 00:17:22.535 [2024-10-30 17:21:05.176704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.201800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.201987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:22.535 [2024-10-30 17:21:05.202020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.027 ms 00:17:22.535 [2024-10-30 17:21:05.202032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.202880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.202917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.535 
[2024-10-30 17:21:05.202934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:17:22.535 [2024-10-30 17:21:05.202945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.290676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.290729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:22.535 [2024-10-30 17:21:05.290758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.632 ms 00:17:22.535 [2024-10-30 17:21:05.290770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.318370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.318420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:22.535 [2024-10-30 17:21:05.318447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.476 ms 00:17:22.535 [2024-10-30 17:21:05.318458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.343953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.343999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:22.535 [2024-10-30 17:21:05.344019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.427 ms 00:17:22.535 [2024-10-30 17:21:05.344030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.371145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.371211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.535 [2024-10-30 17:21:05.371236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.046 ms 00:17:22.535 [2024-10-30 17:21:05.371247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.371331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.371347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.535 [2024-10-30 17:21:05.371370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:22.535 [2024-10-30 17:21:05.371381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.371498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.535 [2024-10-30 17:21:05.371511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.535 [2024-10-30 17:21:05.371527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:22.535 [2024-10-30 17:21:05.371538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.535 [2024-10-30 17:21:05.374144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4257.438 ms, result 0 00:17:22.535 { 00:17:22.535 "name": "ftl0", 00:17:22.535 "uuid": "6680e01b-326a-4063-9fcb-aad95718bcea" 00:17:22.535 } 00:17:22.535 17:21:05 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:22.535 17:21:05 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:22.795 17:21:05 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:22.795 17:21:05 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:23.057 [2024-10-30 17:21:05.779910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.779959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.057 [2024-10-30 17:21:05.779976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.057 [2024-10-30 17:21:05.779998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.780029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.057 [2024-10-30 17:21:05.782840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.782877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.057 [2024-10-30 17:21:05.782898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.785 ms 00:17:23.057 [2024-10-30 17:21:05.782910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.783252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.783280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.057 [2024-10-30 17:21:05.783296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:23.057 [2024-10-30 17:21:05.783312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.786614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.786639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.057 [2024-10-30 17:21:05.786655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.276 ms 00:17:23.057 [2024-10-30 17:21:05.786667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.792935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.792966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.057 [2024-10-30 17:21:05.792984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.236 ms 00:17:23.057 [2024-10-30 17:21:05.792996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.816961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.817000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.057 [2024-10-30 17:21:05.817019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.870 ms 00:17:23.057 [2024-10-30 17:21:05.817029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.832553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.832602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.057 [2024-10-30 17:21:05.832621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.471 ms 00:17:23.057 [2024-10-30 17:21:05.832634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.832831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.832857] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.057 [2024-10-30 17:21:05.832873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:23.057 [2024-10-30 17:21:05.832885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.856284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.856318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.057 [2024-10-30 17:21:05.856334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.371 ms 00:17:23.057 [2024-10-30 17:21:05.856345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.879018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.879050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.057 [2024-10-30 17:21:05.879066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.626 ms 00:17:23.057 [2024-10-30 17:21:05.879077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.902330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.902360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.057 [2024-10-30 17:21:05.902377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.206 ms 00:17:23.057 [2024-10-30 17:21:05.902388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.925134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.057 [2024-10-30 17:21:05.925168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.057 [2024-10-30 17:21:05.925185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.653 ms 00:17:23.057 [2024-10-30 17:21:05.925196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.057 [2024-10-30 17:21:05.925253] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.057 [2024-10-30 17:21:05.925273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925411] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 [2024-10-30 17:21:05.925756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.057 
[2024-10-30 17:21:05.925769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.925997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:23.058 [2024-10-30 17:21:05.926161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.058 [2024-10-30 17:21:05.926767] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.058 [2024-10-30 17:21:05.926782] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6680e01b-326a-4063-9fcb-aad95718bcea 00:17:23.058 [2024-10-30 17:21:05.926795] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.058 [2024-10-30 17:21:05.926814] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.058 [2024-10-30 17:21:05.926827] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.058 [2024-10-30 17:21:05.926841] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.058 [2024-10-30 17:21:05.926856] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.058 [2024-10-30 17:21:05.926871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.058 [2024-10-30 17:21:05.926884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.058 [2024-10-30 17:21:05.926898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.058 [2024-10-30 17:21:05.926909] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:23.058 [2024-10-30 17:21:05.926925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.058 [2024-10-30 17:21:05.926937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.058 [2024-10-30 17:21:05.926953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:17:23.058 [2024-10-30 17:21:05.926967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.058 [2024-10-30 17:21:05.940575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.058 [2024-10-30 17:21:05.940611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.058 [2024-10-30 17:21:05.940627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.543 ms 00:17:23.058 [2024-10-30 17:21:05.940638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.058 [2024-10-30 17:21:05.941112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.058 [2024-10-30 17:21:05.941142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.058 [2024-10-30 17:21:05.941158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:23.058 [2024-10-30 17:21:05.941171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.058 [2024-10-30 17:21:05.983673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.058 [2024-10-30 17:21:05.983710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.058 [2024-10-30 17:21:05.983726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.058 [2024-10-30 17:21:05.983737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.058 [2024-10-30 17:21:05.983817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.058 [2024-10-30 17:21:05.983831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.058 [2024-10-30 17:21:05.983847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.058 [2024-10-30 17:21:05.983859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.058 [2024-10-30 17:21:05.983959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.059 [2024-10-30 17:21:05.983974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.059 [2024-10-30 17:21:05.983991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.059 [2024-10-30 17:21:05.984003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.059 [2024-10-30 17:21:05.984033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.059 [2024-10-30 17:21:05.984046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.059 [2024-10-30 17:21:05.984061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.059 [2024-10-30 17:21:05.984073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.320 [2024-10-30 17:21:06.062119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.320 [2024-10-30 17:21:06.062156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.320 [2024-10-30 17:21:06.062173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:23.320 [2024-10-30 17:21:06.062184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.320 [2024-10-30 17:21:06.126990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.320 [2024-10-30 17:21:06.127037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.320 [2024-10-30 17:21:06.127054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.320 [2024-10-30 17:21:06.127065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.320 [2024-10-30 17:21:06.127184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.320 [2024-10-30 17:21:06.127221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.320 [2024-10-30 17:21:06.127239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.320 [2024-10-30 17:21:06.127251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.320 [2024-10-30 17:21:06.127323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.320 [2024-10-30 17:21:06.127337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.320 [2024-10-30 17:21:06.127353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.320 [2024-10-30 17:21:06.127365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.320 [2024-10-30 17:21:06.127499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.320 [2024-10-30 17:21:06.127514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.320 [2024-10-30 17:21:06.127533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.320 [2024-10-30 17:21:06.127546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.320 [2024-10-30 17:21:06.127596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.320 [2024-10-30 17:21:06.127612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.320 [2024-10-30 17:21:06.127627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.320 [2024-10-30 17:21:06.127640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.320 [2024-10-30 17:21:06.127694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.320 [2024-10-30 17:21:06.127721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.320 [2024-10-30 17:21:06.127739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.320 [2024-10-30 17:21:06.127751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.320 [2024-10-30 17:21:06.127817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.320 [2024-10-30 17:21:06.127832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.321 [2024-10-30 17:21:06.127848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.321 [2024-10-30 17:21:06.127860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.321 [2024-10-30 17:21:06.128042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.072 ms, result 0 00:17:23.321 true 00:17:23.321 17:21:06 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74336 
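A condensed sketch of the killprocess helper whose checks are traced below; the helper body itself is not shown in this log, so the structure is an approximation assembled from the visible steps:

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                        # same guard as the traced '[' -z 74336 ']'
  kill -0 "$pid" || return 1                       # the target process must still be running
  local process_name=
  [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
  # the traced helper also compares process_name against sudo; that branch is not taken here
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}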
00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74336 ']' 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74336 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74336 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:23.321 killing process with pid 74336 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74336' 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74336 00:17:23.321 17:21:06 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74336 00:17:29.909 17:21:12 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:34.117 262144+0 records in 00:17:34.117 262144+0 records out 00:17:34.117 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.3014 s, 250 MB/s 00:17:34.117 17:21:16 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:36.031 17:21:18 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:36.032 [2024-10-30 17:21:18.594852] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:17:36.032 [2024-10-30 17:21:18.594970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74583 ] 00:17:36.032 [2024-10-30 17:21:18.751151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.032 [2024-10-30 17:21:18.830560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.292 [2024-10-30 17:21:19.034069] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:36.292 [2024-10-30 17:21:19.034120] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:36.292 [2024-10-30 17:21:19.185627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.185664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:36.292 [2024-10-30 17:21:19.185676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:36.292 [2024-10-30 17:21:19.185682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.185715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.185722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:36.292 [2024-10-30 17:21:19.185730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:36.292 [2024-10-30 17:21:19.185736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.185748] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:17:36.292 [2024-10-30 17:21:19.186255] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:36.292 [2024-10-30 17:21:19.186275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.186281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:36.292 [2024-10-30 17:21:19.186287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:17:36.292 [2024-10-30 17:21:19.186293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.187194] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:36.292 [2024-10-30 17:21:19.196683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.196719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:36.292 [2024-10-30 17:21:19.196729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.490 ms 00:17:36.292 [2024-10-30 17:21:19.196736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.196778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.196787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:36.292 [2024-10-30 17:21:19.196793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:36.292 [2024-10-30 17:21:19.196799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.201109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.201136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:36.292 [2024-10-30 17:21:19.201143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:17:36.292 [2024-10-30 17:21:19.201148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.201221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.201228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:36.292 [2024-10-30 17:21:19.201235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:36.292 [2024-10-30 17:21:19.201240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.201271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.201278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:36.292 [2024-10-30 17:21:19.201284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:36.292 [2024-10-30 17:21:19.201290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.201305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:36.292 [2024-10-30 17:21:19.203945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.292 [2024-10-30 17:21:19.203969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:36.292 [2024-10-30 17:21:19.203976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:17:36.292 [2024-10-30 17:21:19.203984] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.292 [2024-10-30 17:21:19.204007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.204013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:36.293 [2024-10-30 17:21:19.204019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:36.293 [2024-10-30 17:21:19.204025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.204040] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:36.293 [2024-10-30 17:21:19.204054] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:36.293 [2024-10-30 17:21:19.204081] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:36.293 [2024-10-30 17:21:19.204094] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:36.293 [2024-10-30 17:21:19.204178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:36.293 [2024-10-30 17:21:19.204190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:36.293 [2024-10-30 17:21:19.204206] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:36.293 [2024-10-30 17:21:19.204213] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204220] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204227] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:36.293 [2024-10-30 17:21:19.204233] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:36.293 [2024-10-30 17:21:19.204238] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:36.293 [2024-10-30 17:21:19.204243] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:36.293 [2024-10-30 17:21:19.204251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.204257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:36.293 [2024-10-30 17:21:19.204262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:17:36.293 [2024-10-30 17:21:19.204268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.204330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.204336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:36.293 [2024-10-30 17:21:19.204342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:36.293 [2024-10-30 17:21:19.204347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.204423] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:36.293 [2024-10-30 17:21:19.204441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:36.293 [2024-10-30 17:21:19.204447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:17:36.293 [2024-10-30 17:21:19.204453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:36.293 [2024-10-30 17:21:19.204464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:36.293 [2024-10-30 17:21:19.204480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:36.293 [2024-10-30 17:21:19.204490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:36.293 [2024-10-30 17:21:19.204495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:36.293 [2024-10-30 17:21:19.204499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:36.293 [2024-10-30 17:21:19.204504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:36.293 [2024-10-30 17:21:19.204509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:36.293 [2024-10-30 17:21:19.204518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:36.293 [2024-10-30 17:21:19.204527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:36.293 [2024-10-30 17:21:19.204544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:36.293 [2024-10-30 17:21:19.204559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:36.293 [2024-10-30 17:21:19.204574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:36.293 [2024-10-30 17:21:19.204588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:36.293 [2024-10-30 17:21:19.204602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:36.293 [2024-10-30 17:21:19.204612] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:17:36.293 [2024-10-30 17:21:19.204616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:36.293 [2024-10-30 17:21:19.204621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:36.293 [2024-10-30 17:21:19.204626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:36.293 [2024-10-30 17:21:19.204631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:36.293 [2024-10-30 17:21:19.204636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:36.293 [2024-10-30 17:21:19.204646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:36.293 [2024-10-30 17:21:19.204650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204655] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:36.293 [2024-10-30 17:21:19.204661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:36.293 [2024-10-30 17:21:19.204666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.293 [2024-10-30 17:21:19.204677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:36.293 [2024-10-30 17:21:19.204682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:36.293 [2024-10-30 17:21:19.204687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:36.293 [2024-10-30 17:21:19.204692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:36.293 [2024-10-30 17:21:19.204697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:36.293 [2024-10-30 17:21:19.204703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:36.293 [2024-10-30 17:21:19.204710] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:36.293 [2024-10-30 17:21:19.204716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:36.293 [2024-10-30 17:21:19.204723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:36.293 [2024-10-30 17:21:19.204728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:36.293 [2024-10-30 17:21:19.204733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:36.293 [2024-10-30 17:21:19.204738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:36.293 [2024-10-30 17:21:19.204744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:36.293 [2024-10-30 17:21:19.204749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:36.293 [2024-10-30 17:21:19.204754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:36.293 [2024-10-30 17:21:19.204759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:36.293 [2024-10-30 17:21:19.204764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:36.293 [2024-10-30 17:21:19.204769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:36.293 [2024-10-30 17:21:19.204774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:36.293 [2024-10-30 17:21:19.204779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:36.293 [2024-10-30 17:21:19.204784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:36.293 [2024-10-30 17:21:19.204789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:36.293 [2024-10-30 17:21:19.204794] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:36.293 [2024-10-30 17:21:19.204801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:36.293 [2024-10-30 17:21:19.204809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:36.293 [2024-10-30 17:21:19.204814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:36.293 [2024-10-30 17:21:19.204819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:36.293 [2024-10-30 17:21:19.204825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:36.293 [2024-10-30 17:21:19.204830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.204835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:36.293 [2024-10-30 17:21:19.204841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:17:36.293 [2024-10-30 17:21:19.204846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.226087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.226114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:36.293 [2024-10-30 17:21:19.226123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.210 ms 00:17:36.293 [2024-10-30 17:21:19.226129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.226194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.226213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:36.293 [2024-10-30 17:21:19.226219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.049 ms 00:17:36.293 [2024-10-30 17:21:19.226224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.266748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.266779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:36.293 [2024-10-30 17:21:19.266789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.486 ms 00:17:36.293 [2024-10-30 17:21:19.266795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.266825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.266832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:36.293 [2024-10-30 17:21:19.266838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:36.293 [2024-10-30 17:21:19.266847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.267157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.267181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:36.293 [2024-10-30 17:21:19.267188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:17:36.293 [2024-10-30 17:21:19.267194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.293 [2024-10-30 17:21:19.267300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.293 [2024-10-30 17:21:19.267315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:36.293 [2024-10-30 17:21:19.267322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:36.293 [2024-10-30 17:21:19.267328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.277691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.277718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:36.555 [2024-10-30 17:21:19.277725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.345 ms 00:17:36.555 [2024-10-30 17:21:19.277731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.287381] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:36.555 [2024-10-30 17:21:19.287410] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:36.555 [2024-10-30 17:21:19.287419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.287426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:36.555 [2024-10-30 17:21:19.287433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.606 ms 00:17:36.555 [2024-10-30 17:21:19.287438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.305656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.305693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:36.555 [2024-10-30 17:21:19.305705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.186 ms 00:17:36.555 [2024-10-30 17:21:19.305711] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.314508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.314539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:36.555 [2024-10-30 17:21:19.314546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.765 ms 00:17:36.555 [2024-10-30 17:21:19.314551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.322983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.323009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:36.555 [2024-10-30 17:21:19.323016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.407 ms 00:17:36.555 [2024-10-30 17:21:19.323022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.323491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.323511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:36.555 [2024-10-30 17:21:19.323518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:17:36.555 [2024-10-30 17:21:19.323523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.367360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.367393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:36.555 [2024-10-30 17:21:19.367401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.824 ms 00:17:36.555 [2024-10-30 17:21:19.367408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.375190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:36.555 [2024-10-30 17:21:19.377017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.377042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:36.555 [2024-10-30 17:21:19.377051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.575 ms 00:17:36.555 [2024-10-30 17:21:19.377059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.377110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.377119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:36.555 [2024-10-30 17:21:19.377127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:36.555 [2024-10-30 17:21:19.377134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.377177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.377187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:36.555 [2024-10-30 17:21:19.377194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:36.555 [2024-10-30 17:21:19.377211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.377227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.377235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Start core poller 00:17:36.555 [2024-10-30 17:21:19.377242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:36.555 [2024-10-30 17:21:19.377248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.377273] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:36.555 [2024-10-30 17:21:19.377281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.377288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:36.555 [2024-10-30 17:21:19.377296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:36.555 [2024-10-30 17:21:19.377303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.394695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.394721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:36.555 [2024-10-30 17:21:19.394730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.378 ms 00:17:36.555 [2024-10-30 17:21:19.394736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.394790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.555 [2024-10-30 17:21:19.394797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:36.555 [2024-10-30 17:21:19.394803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:36.555 [2024-10-30 17:21:19.394809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.555 [2024-10-30 17:21:19.395514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.558 ms, result 0 00:17:37.496  [2024-10-30T17:21:21.416Z] Copying: 25/1024 [MB] (25 MBps) [2024-10-30T17:21:22.800Z] Copying: 53/1024 [MB] (27 MBps) [2024-10-30T17:21:23.743Z] Copying: 67/1024 [MB] (14 MBps) [2024-10-30T17:21:24.741Z] Copying: 84/1024 [MB] (16 MBps) [2024-10-30T17:21:25.692Z] Copying: 101/1024 [MB] (17 MBps) [2024-10-30T17:21:26.634Z] Copying: 117/1024 [MB] (15 MBps) [2024-10-30T17:21:27.576Z] Copying: 150/1024 [MB] (33 MBps) [2024-10-30T17:21:28.517Z] Copying: 185/1024 [MB] (34 MBps) [2024-10-30T17:21:29.459Z] Copying: 205/1024 [MB] (19 MBps) [2024-10-30T17:21:30.844Z] Copying: 229/1024 [MB] (24 MBps) [2024-10-30T17:21:31.417Z] Copying: 258/1024 [MB] (28 MBps) [2024-10-30T17:21:32.804Z] Copying: 276/1024 [MB] (17 MBps) [2024-10-30T17:21:33.748Z] Copying: 305/1024 [MB] (29 MBps) [2024-10-30T17:21:34.692Z] Copying: 330/1024 [MB] (24 MBps) [2024-10-30T17:21:35.636Z] Copying: 363/1024 [MB] (33 MBps) [2024-10-30T17:21:36.581Z] Copying: 389/1024 [MB] (25 MBps) [2024-10-30T17:21:37.525Z] Copying: 417/1024 [MB] (28 MBps) [2024-10-30T17:21:38.470Z] Copying: 436/1024 [MB] (19 MBps) [2024-10-30T17:21:39.415Z] Copying: 455/1024 [MB] (18 MBps) [2024-10-30T17:21:40.801Z] Copying: 469/1024 [MB] (14 MBps) [2024-10-30T17:21:41.746Z] Copying: 501/1024 [MB] (32 MBps) [2024-10-30T17:21:42.691Z] Copying: 524/1024 [MB] (23 MBps) [2024-10-30T17:21:43.636Z] Copying: 542/1024 [MB] (17 MBps) [2024-10-30T17:21:44.581Z] Copying: 560/1024 [MB] (18 MBps) [2024-10-30T17:21:45.525Z] Copying: 580/1024 [MB] (19 MBps) [2024-10-30T17:21:46.467Z] Copying: 603/1024 [MB] (22 MBps) [2024-10-30T17:21:47.410Z] Copying: 620/1024 [MB] (17 MBps) 
[2024-10-30T17:21:48.794Z] Copying: 648/1024 [MB] (28 MBps) [2024-10-30T17:21:49.738Z] Copying: 660/1024 [MB] (11 MBps) [2024-10-30T17:21:50.680Z] Copying: 674/1024 [MB] (13 MBps) [2024-10-30T17:21:51.624Z] Copying: 697/1024 [MB] (23 MBps) [2024-10-30T17:21:52.567Z] Copying: 715/1024 [MB] (18 MBps) [2024-10-30T17:21:53.508Z] Copying: 742/1024 [MB] (27 MBps) [2024-10-30T17:21:54.452Z] Copying: 760/1024 [MB] (17 MBps) [2024-10-30T17:21:55.836Z] Copying: 788/1024 [MB] (27 MBps) [2024-10-30T17:21:56.453Z] Copying: 809/1024 [MB] (21 MBps) [2024-10-30T17:21:57.441Z] Copying: 841/1024 [MB] (32 MBps) [2024-10-30T17:21:58.828Z] Copying: 865/1024 [MB] (24 MBps) [2024-10-30T17:21:59.776Z] Copying: 885/1024 [MB] (19 MBps) [2024-10-30T17:22:00.721Z] Copying: 906/1024 [MB] (20 MBps) [2024-10-30T17:22:01.664Z] Copying: 926/1024 [MB] (20 MBps) [2024-10-30T17:22:02.608Z] Copying: 964/1024 [MB] (38 MBps) [2024-10-30T17:22:03.552Z] Copying: 998/1024 [MB] (33 MBps) [2024-10-30T17:22:03.814Z] Copying: 1017/1024 [MB] (18 MBps) [2024-10-30T17:22:03.814Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-10-30 17:22:03.705892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.706086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:20.833 [2024-10-30 17:22:03.706111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:20.833 [2024-10-30 17:22:03.706121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.833 [2024-10-30 17:22:03.706150] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:20.833 [2024-10-30 17:22:03.709189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.709252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:20.833 [2024-10-30 17:22:03.709265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:18:20.833 [2024-10-30 17:22:03.709273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.833 [2024-10-30 17:22:03.711856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.711908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:20.833 [2024-10-30 17:22:03.711921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.545 ms 00:18:20.833 [2024-10-30 17:22:03.711929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.833 [2024-10-30 17:22:03.731331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.731383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:20.833 [2024-10-30 17:22:03.731396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.383 ms 00:18:20.833 [2024-10-30 17:22:03.731404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.833 [2024-10-30 17:22:03.737588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.737635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:20.833 [2024-10-30 17:22:03.737647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.138 ms 00:18:20.833 [2024-10-30 17:22:03.737655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.833 [2024-10-30 17:22:03.764986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.765033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:20.833 [2024-10-30 17:22:03.765046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.271 ms 00:18:20.833 [2024-10-30 17:22:03.765054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.833 [2024-10-30 17:22:03.782416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.782463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:20.833 [2024-10-30 17:22:03.782476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.313 ms 00:18:20.833 [2024-10-30 17:22:03.782484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.833 [2024-10-30 17:22:03.782631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.782644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:20.833 [2024-10-30 17:22:03.782653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:20.833 [2024-10-30 17:22:03.782668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.833 [2024-10-30 17:22:03.808975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.833 [2024-10-30 17:22:03.809017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:20.833 [2024-10-30 17:22:03.809029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.291 ms 00:18:20.833 [2024-10-30 17:22:03.809036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.096 [2024-10-30 17:22:03.835258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.096 [2024-10-30 17:22:03.835301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:21.096 [2024-10-30 17:22:03.835324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.173 ms 00:18:21.096 [2024-10-30 17:22:03.835331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.096 [2024-10-30 17:22:03.860644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.096 [2024-10-30 17:22:03.860687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:21.096 [2024-10-30 17:22:03.860699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.261 ms 00:18:21.096 [2024-10-30 17:22:03.860706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.096 [2024-10-30 17:22:03.885882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.096 [2024-10-30 17:22:03.885925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:21.096 [2024-10-30 17:22:03.885936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.085 ms 00:18:21.096 [2024-10-30 17:22:03.885943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.096 [2024-10-30 17:22:03.885989] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:21.096 [2024-10-30 17:22:03.886006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886026] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:21.096 [2024-10-30 17:22:03.886153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886232] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 
[2024-10-30 17:22:03.886468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:18:21.097 [2024-10-30 17:22:03.886656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:21.097 [2024-10-30 17:22:03.886837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:21.097 [2024-10-30 17:22:03.886852] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6680e01b-326a-4063-9fcb-aad95718bcea 
00:18:21.097 [2024-10-30 17:22:03.886860] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:21.097 [2024-10-30 17:22:03.886870] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:21.097 [2024-10-30 17:22:03.886877] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:21.097 [2024-10-30 17:22:03.886885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:21.097 [2024-10-30 17:22:03.886892] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:21.097 [2024-10-30 17:22:03.886899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:21.097 [2024-10-30 17:22:03.886907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:21.097 [2024-10-30 17:22:03.886920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:21.097 [2024-10-30 17:22:03.886926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:21.097 [2024-10-30 17:22:03.886934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.097 [2024-10-30 17:22:03.886942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:21.097 [2024-10-30 17:22:03.886951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:18:21.097 [2024-10-30 17:22:03.886958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.097 [2024-10-30 17:22:03.900572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.097 [2024-10-30 17:22:03.900610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:21.097 [2024-10-30 17:22:03.900621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.576 ms 00:18:21.097 [2024-10-30 17:22:03.900628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.098 [2024-10-30 17:22:03.901022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.098 [2024-10-30 17:22:03.901034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:21.098 [2024-10-30 17:22:03.901042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:18:21.098 [2024-10-30 17:22:03.901049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.098 [2024-10-30 17:22:03.937874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.098 [2024-10-30 17:22:03.937918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.098 [2024-10-30 17:22:03.937930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.098 [2024-10-30 17:22:03.937940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.098 [2024-10-30 17:22:03.938012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.098 [2024-10-30 17:22:03.938021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.098 [2024-10-30 17:22:03.938031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.098 [2024-10-30 17:22:03.938040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.098 [2024-10-30 17:22:03.938114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.098 [2024-10-30 17:22:03.938125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.098 [2024-10-30 17:22:03.938135] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.098 [2024-10-30 17:22:03.938143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.098 [2024-10-30 17:22:03.938160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.098 [2024-10-30 17:22:03.938169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.098 [2024-10-30 17:22:03.938178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.098 [2024-10-30 17:22:03.938186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.098 [2024-10-30 17:22:04.022068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.098 [2024-10-30 17:22:04.022139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.098 [2024-10-30 17:22:04.022152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.098 [2024-10-30 17:22:04.022160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.359 [2024-10-30 17:22:04.091153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.359 [2024-10-30 17:22:04.091221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.359 [2024-10-30 17:22:04.091233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.359 [2024-10-30 17:22:04.091242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.359 [2024-10-30 17:22:04.091320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.359 [2024-10-30 17:22:04.091338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.359 [2024-10-30 17:22:04.091348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.359 [2024-10-30 17:22:04.091357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.359 [2024-10-30 17:22:04.091395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.359 [2024-10-30 17:22:04.091404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.359 [2024-10-30 17:22:04.091413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.359 [2024-10-30 17:22:04.091420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.359 [2024-10-30 17:22:04.091516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.359 [2024-10-30 17:22:04.091526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.359 [2024-10-30 17:22:04.091539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.359 [2024-10-30 17:22:04.091547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.359 [2024-10-30 17:22:04.091578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.359 [2024-10-30 17:22:04.091588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:21.359 [2024-10-30 17:22:04.091597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.359 [2024-10-30 17:22:04.091605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.359 [2024-10-30 17:22:04.091644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.359 [2024-10-30 17:22:04.091653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:18:21.359 [2024-10-30 17:22:04.091665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.360 [2024-10-30 17:22:04.091673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.360 [2024-10-30 17:22:04.091718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.360 [2024-10-30 17:22:04.091729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.360 [2024-10-30 17:22:04.091737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.360 [2024-10-30 17:22:04.091745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.360 [2024-10-30 17:22:04.091874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.955 ms, result 0 00:18:22.302 00:18:22.302 00:18:22.302 17:22:05 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:22.302 [2024-10-30 17:22:05.211543] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:18:22.302 [2024-10-30 17:22:05.211910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75061 ] 00:18:22.562 [2024-10-30 17:22:05.375952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.562 [2024-10-30 17:22:05.496901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.824 [2024-10-30 17:22:05.786847] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.824 [2024-10-30 17:22:05.786934] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:23.086 [2024-10-30 17:22:05.947573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.947637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:23.086 [2024-10-30 17:22:05.947657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:23.086 [2024-10-30 17:22:05.947665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.947721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.947732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.086 [2024-10-30 17:22:05.947743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:23.086 [2024-10-30 17:22:05.947752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.947773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:23.086 [2024-10-30 17:22:05.948544] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:23.086 [2024-10-30 17:22:05.948576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.948584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.086 [2024-10-30 17:22:05.948594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.808 ms 00:18:23.086 [2024-10-30 17:22:05.948602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.950472] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:23.086 [2024-10-30 17:22:05.964688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.964749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:23.086 [2024-10-30 17:22:05.964764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.218 ms 00:18:23.086 [2024-10-30 17:22:05.964772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.964850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.964863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:23.086 [2024-10-30 17:22:05.964872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:23.086 [2024-10-30 17:22:05.964880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.973086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.973129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.086 [2024-10-30 17:22:05.973139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.126 ms 00:18:23.086 [2024-10-30 17:22:05.973147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.973247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.973257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.086 [2024-10-30 17:22:05.973266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:23.086 [2024-10-30 17:22:05.973275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.973324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.973335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:23.086 [2024-10-30 17:22:05.973343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:23.086 [2024-10-30 17:22:05.973350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.973375] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.086 [2024-10-30 17:22:05.977354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.977395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.086 [2024-10-30 17:22:05.977405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.985 ms 00:18:23.086 [2024-10-30 17:22:05.977417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.086 [2024-10-30 17:22:05.977453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.086 [2024-10-30 17:22:05.977462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:23.086 [2024-10-30 17:22:05.977471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:23.086 [2024-10-30 17:22:05.977479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:23.087 [2024-10-30 17:22:05.977532] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:23.087 [2024-10-30 17:22:05.977555] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:23.087 [2024-10-30 17:22:05.977593] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:23.087 [2024-10-30 17:22:05.977613] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:23.087 [2024-10-30 17:22:05.977718] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:23.087 [2024-10-30 17:22:05.977729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:23.087 [2024-10-30 17:22:05.977740] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:23.087 [2024-10-30 17:22:05.977751] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:23.087 [2024-10-30 17:22:05.977761] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:23.087 [2024-10-30 17:22:05.977769] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:23.087 [2024-10-30 17:22:05.977776] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:23.087 [2024-10-30 17:22:05.977784] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:23.087 [2024-10-30 17:22:05.977794] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:23.087 [2024-10-30 17:22:05.977805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.087 [2024-10-30 17:22:05.977812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:23.087 [2024-10-30 17:22:05.977821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:18:23.087 [2024-10-30 17:22:05.977829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.087 [2024-10-30 17:22:05.977927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.087 [2024-10-30 17:22:05.977934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:23.087 [2024-10-30 17:22:05.977942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:23.087 [2024-10-30 17:22:05.977949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.087 [2024-10-30 17:22:05.978054] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:23.087 [2024-10-30 17:22:05.978078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:23.087 [2024-10-30 17:22:05.978088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:23.087 [2024-10-30 17:22:05.978110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978125] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:23.087 [2024-10-30 17:22:05.978132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.087 [2024-10-30 17:22:05.978148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:23.087 [2024-10-30 17:22:05.978155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:23.087 [2024-10-30 17:22:05.978163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.087 [2024-10-30 17:22:05.978170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:23.087 [2024-10-30 17:22:05.978176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:23.087 [2024-10-30 17:22:05.978189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:23.087 [2024-10-30 17:22:05.978231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:23.087 [2024-10-30 17:22:05.978253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:23.087 [2024-10-30 17:22:05.978274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:23.087 [2024-10-30 17:22:05.978295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:23.087 [2024-10-30 17:22:05.978316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:23.087 [2024-10-30 17:22:05.978336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.087 [2024-10-30 17:22:05.978350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:23.087 [2024-10-30 17:22:05.978357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:23.087 [2024-10-30 17:22:05.978364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.087 [2024-10-30 17:22:05.978370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:23.087 [2024-10-30 17:22:05.978376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 
00:18:23.087 [2024-10-30 17:22:05.978383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:23.087 [2024-10-30 17:22:05.978398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:23.087 [2024-10-30 17:22:05.978406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978414] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:23.087 [2024-10-30 17:22:05.978422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:23.087 [2024-10-30 17:22:05.978429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.087 [2024-10-30 17:22:05.978444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:23.087 [2024-10-30 17:22:05.978451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:23.087 [2024-10-30 17:22:05.978459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:23.087 [2024-10-30 17:22:05.978465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:23.087 [2024-10-30 17:22:05.978472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:23.087 [2024-10-30 17:22:05.978479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:23.087 [2024-10-30 17:22:05.978487] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:23.087 [2024-10-30 17:22:05.978497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.087 [2024-10-30 17:22:05.978505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:23.087 [2024-10-30 17:22:05.978512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:23.087 [2024-10-30 17:22:05.978519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:23.087 [2024-10-30 17:22:05.978527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:23.087 [2024-10-30 17:22:05.978533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:23.088 [2024-10-30 17:22:05.978540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:23.088 [2024-10-30 17:22:05.978547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:23.088 [2024-10-30 17:22:05.978554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:23.088 [2024-10-30 17:22:05.978561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:23.088 [2024-10-30 17:22:05.978567] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:23.088 [2024-10-30 17:22:05.978575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:23.088 [2024-10-30 17:22:05.978583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:23.088 [2024-10-30 17:22:05.978590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:23.088 [2024-10-30 17:22:05.978597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:23.088 [2024-10-30 17:22:05.978613] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:23.088 [2024-10-30 17:22:05.978621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.088 [2024-10-30 17:22:05.978631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:23.088 [2024-10-30 17:22:05.978638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:23.088 [2024-10-30 17:22:05.978645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:23.088 [2024-10-30 17:22:05.978654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:23.088 [2024-10-30 17:22:05.978662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.088 [2024-10-30 17:22:05.978670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:23.088 [2024-10-30 17:22:05.978678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:18:23.088 [2024-10-30 17:22:05.978686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.088 [2024-10-30 17:22:06.010412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.088 [2024-10-30 17:22:06.010459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:23.088 [2024-10-30 17:22:06.010471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.680 ms 00:18:23.088 [2024-10-30 17:22:06.010480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.088 [2024-10-30 17:22:06.010573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.088 [2024-10-30 17:22:06.010587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:23.088 [2024-10-30 17:22:06.010596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:23.088 [2024-10-30 17:22:06.010604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.088 [2024-10-30 17:22:06.053082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.088 [2024-10-30 17:22:06.053137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:23.088 [2024-10-30 17:22:06.053151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.419 ms 
00:18:23.088 [2024-10-30 17:22:06.053160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.088 [2024-10-30 17:22:06.053224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.088 [2024-10-30 17:22:06.053235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:23.088 [2024-10-30 17:22:06.053245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:23.088 [2024-10-30 17:22:06.053257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.088 [2024-10-30 17:22:06.053904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.088 [2024-10-30 17:22:06.053945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:23.088 [2024-10-30 17:22:06.053956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:18:23.088 [2024-10-30 17:22:06.053965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.088 [2024-10-30 17:22:06.054126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.088 [2024-10-30 17:22:06.054144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:23.088 [2024-10-30 17:22:06.054154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:18:23.088 [2024-10-30 17:22:06.054162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.069748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.069795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:23.350 [2024-10-30 17:22:06.069806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.558 ms 00:18:23.350 [2024-10-30 17:22:06.069818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.084310] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:23.350 [2024-10-30 17:22:06.084358] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:23.350 [2024-10-30 17:22:06.084373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.084381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:23.350 [2024-10-30 17:22:06.084392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.429 ms 00:18:23.350 [2024-10-30 17:22:06.084399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.110811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.110865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:23.350 [2024-10-30 17:22:06.110878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.357 ms 00:18:23.350 [2024-10-30 17:22:06.110886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.123909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.123956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:23.350 [2024-10-30 17:22:06.123968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.966 ms 00:18:23.350 [2024-10-30 17:22:06.123976] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.136770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.136820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:23.350 [2024-10-30 17:22:06.136833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.746 ms 00:18:23.350 [2024-10-30 17:22:06.136840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.137505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.137536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:23.350 [2024-10-30 17:22:06.137548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:18:23.350 [2024-10-30 17:22:06.137556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.204998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.205061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:23.350 [2024-10-30 17:22:06.205077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.419 ms 00:18:23.350 [2024-10-30 17:22:06.205093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.216526] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:23.350 [2024-10-30 17:22:06.219643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.219686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:23.350 [2024-10-30 17:22:06.219698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.491 ms 00:18:23.350 [2024-10-30 17:22:06.219706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.219791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.219802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:23.350 [2024-10-30 17:22:06.219811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:23.350 [2024-10-30 17:22:06.219820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.219892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.219905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:23.350 [2024-10-30 17:22:06.219914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:23.350 [2024-10-30 17:22:06.219922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.219943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.219951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:23.350 [2024-10-30 17:22:06.219960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:23.350 [2024-10-30 17:22:06.219968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.220005] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:23.350 [2024-10-30 17:22:06.220020] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.220029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:23.350 [2024-10-30 17:22:06.220038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:23.350 [2024-10-30 17:22:06.220045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.246024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.246076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:23.350 [2024-10-30 17:22:06.246089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.959 ms 00:18:23.350 [2024-10-30 17:22:06.246098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.246192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.350 [2024-10-30 17:22:06.246219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:23.350 [2024-10-30 17:22:06.246228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:23.350 [2024-10-30 17:22:06.246237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.350 [2024-10-30 17:22:06.248159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.107 ms, result 0 00:18:24.739  [2024-10-30T17:22:08.664Z] Copying: 20/1024 [MB] (20 MBps) [2024-10-30T17:22:09.606Z] Copying: 37/1024 [MB] (17 MBps) [2024-10-30T17:22:10.547Z] Copying: 58/1024 [MB] (20 MBps) [2024-10-30T17:22:11.491Z] Copying: 74/1024 [MB] (15 MBps) [2024-10-30T17:22:12.438Z] Copying: 94/1024 [MB] (20 MBps) [2024-10-30T17:22:13.827Z] Copying: 113/1024 [MB] (19 MBps) [2024-10-30T17:22:14.776Z] Copying: 129/1024 [MB] (15 MBps) [2024-10-30T17:22:15.719Z] Copying: 150/1024 [MB] (20 MBps) [2024-10-30T17:22:16.686Z] Copying: 164/1024 [MB] (14 MBps) [2024-10-30T17:22:17.627Z] Copying: 182/1024 [MB] (17 MBps) [2024-10-30T17:22:18.571Z] Copying: 193/1024 [MB] (10 MBps) [2024-10-30T17:22:19.517Z] Copying: 205/1024 [MB] (12 MBps) [2024-10-30T17:22:20.460Z] Copying: 230/1024 [MB] (24 MBps) [2024-10-30T17:22:21.850Z] Copying: 243/1024 [MB] (12 MBps) [2024-10-30T17:22:22.796Z] Copying: 269/1024 [MB] (26 MBps) [2024-10-30T17:22:23.742Z] Copying: 287/1024 [MB] (18 MBps) [2024-10-30T17:22:24.686Z] Copying: 306/1024 [MB] (18 MBps) [2024-10-30T17:22:25.630Z] Copying: 326/1024 [MB] (20 MBps) [2024-10-30T17:22:26.574Z] Copying: 345/1024 [MB] (18 MBps) [2024-10-30T17:22:27.561Z] Copying: 361/1024 [MB] (16 MBps) [2024-10-30T17:22:28.519Z] Copying: 385/1024 [MB] (24 MBps) [2024-10-30T17:22:29.466Z] Copying: 403/1024 [MB] (18 MBps) [2024-10-30T17:22:30.853Z] Copying: 417/1024 [MB] (13 MBps) [2024-10-30T17:22:31.797Z] Copying: 441/1024 [MB] (24 MBps) [2024-10-30T17:22:32.742Z] Copying: 454/1024 [MB] (12 MBps) [2024-10-30T17:22:33.687Z] Copying: 472/1024 [MB] (17 MBps) [2024-10-30T17:22:34.633Z] Copying: 495/1024 [MB] (22 MBps) [2024-10-30T17:22:35.577Z] Copying: 510/1024 [MB] (15 MBps) [2024-10-30T17:22:36.523Z] Copying: 530/1024 [MB] (19 MBps) [2024-10-30T17:22:37.468Z] Copying: 552/1024 [MB] (21 MBps) [2024-10-30T17:22:38.851Z] Copying: 572/1024 [MB] (20 MBps) [2024-10-30T17:22:39.793Z] Copying: 592/1024 [MB] (19 MBps) [2024-10-30T17:22:40.735Z] Copying: 612/1024 [MB] (19 MBps) [2024-10-30T17:22:41.680Z] Copying: 629/1024 [MB] (17 MBps) [2024-10-30T17:22:42.624Z] Copying: 651/1024 
[MB] (22 MBps) [2024-10-30T17:22:43.568Z] Copying: 673/1024 [MB] (21 MBps) [2024-10-30T17:22:44.511Z] Copying: 695/1024 [MB] (22 MBps) [2024-10-30T17:22:45.455Z] Copying: 711/1024 [MB] (16 MBps) [2024-10-30T17:22:46.843Z] Copying: 731/1024 [MB] (19 MBps) [2024-10-30T17:22:47.786Z] Copying: 761/1024 [MB] (29 MBps) [2024-10-30T17:22:48.730Z] Copying: 777/1024 [MB] (16 MBps) [2024-10-30T17:22:49.675Z] Copying: 796/1024 [MB] (18 MBps) [2024-10-30T17:22:50.618Z] Copying: 814/1024 [MB] (18 MBps) [2024-10-30T17:22:51.561Z] Copying: 831/1024 [MB] (17 MBps) [2024-10-30T17:22:52.504Z] Copying: 855/1024 [MB] (23 MBps) [2024-10-30T17:22:53.448Z] Copying: 878/1024 [MB] (22 MBps) [2024-10-30T17:22:54.835Z] Copying: 893/1024 [MB] (15 MBps) [2024-10-30T17:22:55.780Z] Copying: 909/1024 [MB] (15 MBps) [2024-10-30T17:22:56.724Z] Copying: 928/1024 [MB] (19 MBps) [2024-10-30T17:22:57.666Z] Copying: 943/1024 [MB] (15 MBps) [2024-10-30T17:22:58.609Z] Copying: 954/1024 [MB] (10 MBps) [2024-10-30T17:22:59.607Z] Copying: 978/1024 [MB] (23 MBps) [2024-10-30T17:23:00.549Z] Copying: 1000/1024 [MB] (22 MBps) [2024-10-30T17:23:00.550Z] Copying: 1022/1024 [MB] (22 MBps) [2024-10-30T17:23:00.813Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-10-30 17:23:00.648085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.648178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:17.832 [2024-10-30 17:23:00.648221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:17.832 [2024-10-30 17:23:00.648238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.648275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:17.832 [2024-10-30 17:23:00.652976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.653029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:17.832 [2024-10-30 17:23:00.653047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.674 ms 00:19:17.832 [2024-10-30 17:23:00.653075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.653474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.653510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:17.832 [2024-10-30 17:23:00.653525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:19:17.832 [2024-10-30 17:23:00.653540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.658159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.658191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:17.832 [2024-10-30 17:23:00.658211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.596 ms 00:19:17.832 [2024-10-30 17:23:00.658220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.664401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.664438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:17.832 [2024-10-30 17:23:00.664449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:19:17.832 [2024-10-30 17:23:00.664457] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.684962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.685000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:17.832 [2024-10-30 17:23:00.685009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.445 ms 00:19:17.832 [2024-10-30 17:23:00.685016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.697276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.697310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:17.832 [2024-10-30 17:23:00.697321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:19:17.832 [2024-10-30 17:23:00.697327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.697430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.697438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:17.832 [2024-10-30 17:23:00.697450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:17.832 [2024-10-30 17:23:00.697457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.715847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.715877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:17.832 [2024-10-30 17:23:00.715885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.379 ms 00:19:17.832 [2024-10-30 17:23:00.715890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.734223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.832 [2024-10-30 17:23:00.734258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:17.832 [2024-10-30 17:23:00.734265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.304 ms 00:19:17.832 [2024-10-30 17:23:00.734271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.832 [2024-10-30 17:23:00.751670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.833 [2024-10-30 17:23:00.751697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:17.833 [2024-10-30 17:23:00.751704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.372 ms 00:19:17.833 [2024-10-30 17:23:00.751710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.833 [2024-10-30 17:23:00.768891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.833 [2024-10-30 17:23:00.768917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:17.833 [2024-10-30 17:23:00.768924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.127 ms 00:19:17.833 [2024-10-30 17:23:00.768929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.833 [2024-10-30 17:23:00.768953] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:17.833 [2024-10-30 17:23:00.768964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.768975] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.768981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.768987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.768992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.768998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769116] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 
[2024-10-30 17:23:00.769270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:19:17.833 [2024-10-30 17:23:00.769413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:17.833 [2024-10-30 17:23:00.769462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:17.834 [2024-10-30 17:23:00.769553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:19:17.834 [2024-10-30 17:23:00.769559] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6680e01b-326a-4063-9fcb-aad95718bcea 00:19:17.834 [2024-10-30 17:23:00.769567] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:17.834 [2024-10-30 17:23:00.769572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:17.834 [2024-10-30 17:23:00.769578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:17.834 [2024-10-30 17:23:00.769583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:17.834 [2024-10-30 17:23:00.769588] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:17.834 [2024-10-30 17:23:00.769594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:17.834 [2024-10-30 17:23:00.769603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:17.834 [2024-10-30 17:23:00.769608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:17.834 [2024-10-30 17:23:00.769613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:17.834 [2024-10-30 17:23:00.769618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.834 [2024-10-30 17:23:00.769623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:17.834 [2024-10-30 17:23:00.769630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:19:17.834 [2024-10-30 17:23:00.769635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.834 [2024-10-30 17:23:00.779098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.834 [2024-10-30 17:23:00.779122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:17.834 [2024-10-30 17:23:00.779130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.451 ms 00:19:17.834 [2024-10-30 17:23:00.779136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.834 [2024-10-30 17:23:00.779407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.834 [2024-10-30 17:23:00.779471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:17.834 [2024-10-30 17:23:00.779482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:19:17.834 [2024-10-30 17:23:00.779488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.834 [2024-10-30 17:23:00.805137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.834 [2024-10-30 17:23:00.805166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.834 [2024-10-30 17:23:00.805174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.834 [2024-10-30 17:23:00.805179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.834 [2024-10-30 17:23:00.805222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.834 [2024-10-30 17:23:00.805228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.834 [2024-10-30 17:23:00.805234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.834 [2024-10-30 17:23:00.805240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.834 [2024-10-30 17:23:00.805283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.834 [2024-10-30 
17:23:00.805291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.834 [2024-10-30 17:23:00.805297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.834 [2024-10-30 17:23:00.805303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.834 [2024-10-30 17:23:00.805314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.834 [2024-10-30 17:23:00.805320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.834 [2024-10-30 17:23:00.805325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.834 [2024-10-30 17:23:00.805331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.864621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.096 [2024-10-30 17:23:00.864649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.096 [2024-10-30 17:23:00.864657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.096 [2024-10-30 17:23:00.864663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.912617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.096 [2024-10-30 17:23:00.912651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.096 [2024-10-30 17:23:00.912659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.096 [2024-10-30 17:23:00.912666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.912720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.096 [2024-10-30 17:23:00.912728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.096 [2024-10-30 17:23:00.912734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.096 [2024-10-30 17:23:00.912739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.912765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.096 [2024-10-30 17:23:00.912772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.096 [2024-10-30 17:23:00.912778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.096 [2024-10-30 17:23:00.912783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.912848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.096 [2024-10-30 17:23:00.912858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.096 [2024-10-30 17:23:00.912864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.096 [2024-10-30 17:23:00.912869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.912890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.096 [2024-10-30 17:23:00.912897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:18.096 [2024-10-30 17:23:00.912903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.096 [2024-10-30 17:23:00.912909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.912935] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.096 [2024-10-30 17:23:00.912944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.096 [2024-10-30 17:23:00.912950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.096 [2024-10-30 17:23:00.912956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.912986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.096 [2024-10-30 17:23:00.912993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.096 [2024-10-30 17:23:00.912999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.096 [2024-10-30 17:23:00.913004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.096 [2024-10-30 17:23:00.913091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.007 ms, result 0 00:19:18.668 00:19:18.668 00:19:18.668 17:23:01 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:20.586 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:20.847 17:23:03 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:20.847 [2024-10-30 17:23:03.632859] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:19:20.847 [2024-10-30 17:23:03.632973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:19:20.847 [2024-10-30 17:23:03.790873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.107 [2024-10-30 17:23:03.865971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.107 [2024-10-30 17:23:04.069830] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:21.107 [2024-10-30 17:23:04.069882] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:21.369 [2024-10-30 17:23:04.217033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.369 [2024-10-30 17:23:04.217064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:21.369 [2024-10-30 17:23:04.217076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:21.369 [2024-10-30 17:23:04.217082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.369 [2024-10-30 17:23:04.217115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.369 [2024-10-30 17:23:04.217123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.369 [2024-10-30 17:23:04.217131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:21.369 [2024-10-30 17:23:04.217136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.369 [2024-10-30 17:23:04.217149] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:21.369 [2024-10-30 17:23:04.217691] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV 
Cache device 00:19:21.369 [2024-10-30 17:23:04.217709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.369 [2024-10-30 17:23:04.217715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.370 [2024-10-30 17:23:04.217722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:19:21.370 [2024-10-30 17:23:04.217727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.218612] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:21.370 [2024-10-30 17:23:04.228443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.370 [2024-10-30 17:23:04.228467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:21.370 [2024-10-30 17:23:04.228475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.831 ms 00:19:21.370 [2024-10-30 17:23:04.228481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.228521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.370 [2024-10-30 17:23:04.228531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:21.370 [2024-10-30 17:23:04.228537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:21.370 [2024-10-30 17:23:04.228543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.232794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.370 [2024-10-30 17:23:04.232816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.370 [2024-10-30 17:23:04.232822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:19:21.370 [2024-10-30 17:23:04.232828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.232883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.370 [2024-10-30 17:23:04.232890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.370 [2024-10-30 17:23:04.232897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:21.370 [2024-10-30 17:23:04.232902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.232940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.370 [2024-10-30 17:23:04.232948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:21.370 [2024-10-30 17:23:04.232954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:21.370 [2024-10-30 17:23:04.232960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.232974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:21.370 [2024-10-30 17:23:04.235554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.370 [2024-10-30 17:23:04.235574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.370 [2024-10-30 17:23:04.235581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:19:21.370 [2024-10-30 17:23:04.235589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.235613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:21.370 [2024-10-30 17:23:04.235620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:21.370 [2024-10-30 17:23:04.235625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:21.370 [2024-10-30 17:23:04.235631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.235644] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:21.370 [2024-10-30 17:23:04.235657] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:21.370 [2024-10-30 17:23:04.235683] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:21.370 [2024-10-30 17:23:04.235696] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:21.370 [2024-10-30 17:23:04.235774] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:21.370 [2024-10-30 17:23:04.235786] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:21.370 [2024-10-30 17:23:04.235794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:21.370 [2024-10-30 17:23:04.235802] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:21.370 [2024-10-30 17:23:04.235808] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:21.370 [2024-10-30 17:23:04.235814] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:21.370 [2024-10-30 17:23:04.235820] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:21.370 [2024-10-30 17:23:04.235826] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:21.370 [2024-10-30 17:23:04.235831] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:21.370 [2024-10-30 17:23:04.235839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.370 [2024-10-30 17:23:04.235845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:21.370 [2024-10-30 17:23:04.235850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:21.370 [2024-10-30 17:23:04.235856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.235918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.370 [2024-10-30 17:23:04.235924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:21.370 [2024-10-30 17:23:04.235929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:21.370 [2024-10-30 17:23:04.235934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.370 [2024-10-30 17:23:04.236008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:21.370 [2024-10-30 17:23:04.236021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:21.370 [2024-10-30 17:23:04.236027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236038] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:21.370 [2024-10-30 17:23:04.236043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:21.370 [2024-10-30 17:23:04.236060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:21.370 [2024-10-30 17:23:04.236070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:21.370 [2024-10-30 17:23:04.236076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:21.370 [2024-10-30 17:23:04.236081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:21.370 [2024-10-30 17:23:04.236085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:21.370 [2024-10-30 17:23:04.236090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:21.370 [2024-10-30 17:23:04.236099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:21.370 [2024-10-30 17:23:04.236109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:21.370 [2024-10-30 17:23:04.236124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:21.370 [2024-10-30 17:23:04.236138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:21.370 [2024-10-30 17:23:04.236152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:21.370 [2024-10-30 17:23:04.236167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:21.370 [2024-10-30 17:23:04.236181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:21.370 [2024-10-30 17:23:04.236191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:21.370 [2024-10-30 17:23:04.236196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:21.370 
[2024-10-30 17:23:04.236211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:21.370 [2024-10-30 17:23:04.236217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:21.370 [2024-10-30 17:23:04.236224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:21.370 [2024-10-30 17:23:04.236229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:21.370 [2024-10-30 17:23:04.236240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:21.370 [2024-10-30 17:23:04.236245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236250] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:21.370 [2024-10-30 17:23:04.236255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:21.370 [2024-10-30 17:23:04.236261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:21.370 [2024-10-30 17:23:04.236272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:21.370 [2024-10-30 17:23:04.236277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:21.370 [2024-10-30 17:23:04.236282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:21.370 [2024-10-30 17:23:04.236288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:21.370 [2024-10-30 17:23:04.236293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:21.370 [2024-10-30 17:23:04.236298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:21.370 [2024-10-30 17:23:04.236304] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:21.370 [2024-10-30 17:23:04.236311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:21.370 [2024-10-30 17:23:04.236317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:21.371 [2024-10-30 17:23:04.236323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:21.371 [2024-10-30 17:23:04.236328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:21.371 [2024-10-30 17:23:04.236334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:21.371 [2024-10-30 17:23:04.236339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:21.371 [2024-10-30 17:23:04.236345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:21.371 [2024-10-30 17:23:04.236350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:21.371 [2024-10-30 17:23:04.236355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:21.371 [2024-10-30 17:23:04.236361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:21.371 [2024-10-30 17:23:04.236366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:21.371 [2024-10-30 17:23:04.236372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:21.371 [2024-10-30 17:23:04.236377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:21.371 [2024-10-30 17:23:04.236383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:21.371 [2024-10-30 17:23:04.236388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:21.371 [2024-10-30 17:23:04.236393] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:21.371 [2024-10-30 17:23:04.236402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:21.371 [2024-10-30 17:23:04.236410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:21.371 [2024-10-30 17:23:04.236415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:21.371 [2024-10-30 17:23:04.236421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:21.371 [2024-10-30 17:23:04.236426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:21.371 [2024-10-30 17:23:04.236432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.236437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:21.371 [2024-10-30 17:23:04.236443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:19:21.371 [2024-10-30 17:23:04.236448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.257059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.257082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.371 [2024-10-30 17:23:04.257090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.580 ms 00:19:21.371 [2024-10-30 17:23:04.257096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.257158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.257167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:21.371 [2024-10-30 17:23:04.257174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:21.371 [2024-10-30 17:23:04.257179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 
17:23:04.299946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.299975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.371 [2024-10-30 17:23:04.299985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.721 ms 00:19:21.371 [2024-10-30 17:23:04.299991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.300022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.300029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.371 [2024-10-30 17:23:04.300036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:21.371 [2024-10-30 17:23:04.300044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.300365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.300382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.371 [2024-10-30 17:23:04.300389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:19:21.371 [2024-10-30 17:23:04.300395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.300492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.300499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.371 [2024-10-30 17:23:04.300505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:21.371 [2024-10-30 17:23:04.300511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.310842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.310864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.371 [2024-10-30 17:23:04.310872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.313 ms 00:19:21.371 [2024-10-30 17:23:04.310878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.320570] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:21.371 [2024-10-30 17:23:04.320598] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:21.371 [2024-10-30 17:23:04.320607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.320613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:21.371 [2024-10-30 17:23:04.320620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.648 ms 00:19:21.371 [2024-10-30 17:23:04.320625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.339180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.371 [2024-10-30 17:23:04.339212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:21.371 [2024-10-30 17:23:04.339221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.527 ms 00:19:21.371 [2024-10-30 17:23:04.339228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.371 [2024-10-30 17:23:04.348129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:21.371 [2024-10-30 17:23:04.348151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:21.371 [2024-10-30 17:23:04.348158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.874 ms 00:19:21.371 [2024-10-30 17:23:04.348163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.356734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.356756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:21.632 [2024-10-30 17:23:04.356763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.546 ms 00:19:21.632 [2024-10-30 17:23:04.356768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.357226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.357243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:21.632 [2024-10-30 17:23:04.357250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:19:21.632 [2024-10-30 17:23:04.357256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.400884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.400915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:21.632 [2024-10-30 17:23:04.400925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.613 ms 00:19:21.632 [2024-10-30 17:23:04.400935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.408560] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:21.632 [2024-10-30 17:23:04.410338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.410358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:21.632 [2024-10-30 17:23:04.410366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.373 ms 00:19:21.632 [2024-10-30 17:23:04.410373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.410425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.410434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:21.632 [2024-10-30 17:23:04.410441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:21.632 [2024-10-30 17:23:04.410447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.410490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.410498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:21.632 [2024-10-30 17:23:04.410505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:21.632 [2024-10-30 17:23:04.410512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.410528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.410535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:21.632 [2024-10-30 17:23:04.410542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
00:19:21.632 [2024-10-30 17:23:04.410548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.410573] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:21.632 [2024-10-30 17:23:04.410582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.410589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:21.632 [2024-10-30 17:23:04.410596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:21.632 [2024-10-30 17:23:04.410602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.632 [2024-10-30 17:23:04.427827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.632 [2024-10-30 17:23:04.427850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:21.632 [2024-10-30 17:23:04.427858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.212 ms 00:19:21.632 [2024-10-30 17:23:04.427864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.633 [2024-10-30 17:23:04.427920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.633 [2024-10-30 17:23:04.427928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:21.633 [2024-10-30 17:23:04.427934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:21.633 [2024-10-30 17:23:04.427940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.633 [2024-10-30 17:23:04.428740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.392 ms, result 0 00:19:22.573  [2024-10-30T17:23:06.498Z] Copying: 29/1024 [MB] (29 MBps) [2024-10-30T17:23:07.883Z] Copying: 63/1024 [MB] (34 MBps) [2024-10-30T17:23:08.454Z] Copying: 98/1024 [MB] (34 MBps) [2024-10-30T17:23:09.844Z] Copying: 126/1024 [MB] (28 MBps) [2024-10-30T17:23:10.782Z] Copying: 146/1024 [MB] (20 MBps) [2024-10-30T17:23:11.721Z] Copying: 159/1024 [MB] (12 MBps) [2024-10-30T17:23:12.718Z] Copying: 181/1024 [MB] (22 MBps) [2024-10-30T17:23:13.657Z] Copying: 202/1024 [MB] (21 MBps) [2024-10-30T17:23:14.599Z] Copying: 222/1024 [MB] (19 MBps) [2024-10-30T17:23:15.544Z] Copying: 241/1024 [MB] (18 MBps) [2024-10-30T17:23:16.486Z] Copying: 260/1024 [MB] (19 MBps) [2024-10-30T17:23:17.871Z] Copying: 278/1024 [MB] (17 MBps) [2024-10-30T17:23:18.442Z] Copying: 307/1024 [MB] (29 MBps) [2024-10-30T17:23:19.831Z] Copying: 334/1024 [MB] (27 MBps) [2024-10-30T17:23:20.771Z] Copying: 348/1024 [MB] (13 MBps) [2024-10-30T17:23:21.716Z] Copying: 378/1024 [MB] (30 MBps) [2024-10-30T17:23:22.659Z] Copying: 394/1024 [MB] (16 MBps) [2024-10-30T17:23:23.603Z] Copying: 416/1024 [MB] (22 MBps) [2024-10-30T17:23:24.548Z] Copying: 433/1024 [MB] (17 MBps) [2024-10-30T17:23:25.494Z] Copying: 449/1024 [MB] (15 MBps) [2024-10-30T17:23:26.453Z] Copying: 476/1024 [MB] (27 MBps) [2024-10-30T17:23:27.841Z] Copying: 493/1024 [MB] (17 MBps) [2024-10-30T17:23:28.782Z] Copying: 527/1024 [MB] (33 MBps) [2024-10-30T17:23:29.773Z] Copying: 561/1024 [MB] (33 MBps) [2024-10-30T17:23:30.751Z] Copying: 586/1024 [MB] (25 MBps) [2024-10-30T17:23:31.695Z] Copying: 608/1024 [MB] (22 MBps) [2024-10-30T17:23:32.640Z] Copying: 629/1024 [MB] (20 MBps) [2024-10-30T17:23:33.586Z] Copying: 650/1024 [MB] (21 MBps) [2024-10-30T17:23:34.529Z] Copying: 689/1024 [MB] (38 MBps) [2024-10-30T17:23:35.470Z] 
Copying: 717/1024 [MB] (28 MBps) [2024-10-30T17:23:36.858Z] Copying: 743/1024 [MB] (25 MBps) [2024-10-30T17:23:37.800Z] Copying: 766/1024 [MB] (22 MBps) [2024-10-30T17:23:38.745Z] Copying: 787/1024 [MB] (21 MBps) [2024-10-30T17:23:39.689Z] Copying: 806/1024 [MB] (18 MBps) [2024-10-30T17:23:40.628Z] Copying: 825/1024 [MB] (19 MBps) [2024-10-30T17:23:41.570Z] Copying: 845/1024 [MB] (19 MBps) [2024-10-30T17:23:42.512Z] Copying: 866/1024 [MB] (21 MBps) [2024-10-30T17:23:43.456Z] Copying: 887/1024 [MB] (21 MBps) [2024-10-30T17:23:44.837Z] Copying: 909/1024 [MB] (21 MBps) [2024-10-30T17:23:45.781Z] Copying: 924/1024 [MB] (14 MBps) [2024-10-30T17:23:46.729Z] Copying: 939/1024 [MB] (14 MBps) [2024-10-30T17:23:47.671Z] Copying: 957/1024 [MB] (18 MBps) [2024-10-30T17:23:48.615Z] Copying: 983/1024 [MB] (26 MBps) [2024-10-30T17:23:49.560Z] Copying: 1012/1024 [MB] (29 MBps) [2024-10-30T17:23:49.823Z] Copying: 1023/1024 [MB] (10 MBps) [2024-10-30T17:23:49.823Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-10-30 17:23:49.817468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.842 [2024-10-30 17:23:49.817699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:06.842 [2024-10-30 17:23:49.818013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.842 [2024-10-30 17:23:49.818035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.842 [2024-10-30 17:23:49.820801] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:07.103 [2024-10-30 17:23:49.825048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.103 [2024-10-30 17:23:49.825228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:07.103 [2024-10-30 17:23:49.825249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.068 ms 00:20:07.103 [2024-10-30 17:23:49.825259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.103 [2024-10-30 17:23:49.838844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.103 [2024-10-30 17:23:49.838899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:07.103 [2024-10-30 17:23:49.838911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.911 ms 00:20:07.103 [2024-10-30 17:23:49.838920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.103 [2024-10-30 17:23:49.862614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.103 [2024-10-30 17:23:49.862673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:07.103 [2024-10-30 17:23:49.862685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.669 ms 00:20:07.103 [2024-10-30 17:23:49.862694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.103 [2024-10-30 17:23:49.868876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.103 [2024-10-30 17:23:49.868917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:07.103 [2024-10-30 17:23:49.868928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:20:07.103 [2024-10-30 17:23:49.868936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.103 [2024-10-30 17:23:49.895362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.103 [2024-10-30 17:23:49.895413] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:07.103 [2024-10-30 17:23:49.895426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.373 ms 00:20:07.103 [2024-10-30 17:23:49.895434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.103 [2024-10-30 17:23:49.911617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.103 [2024-10-30 17:23:49.911661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:07.103 [2024-10-30 17:23:49.911680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.137 ms 00:20:07.104 [2024-10-30 17:23:49.911689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.104 [2024-10-30 17:23:50.083184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.104 [2024-10-30 17:23:50.083281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:07.104 [2024-10-30 17:23:50.083296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 171.444 ms 00:20:07.104 [2024-10-30 17:23:50.083305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.364 [2024-10-30 17:23:50.110163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.364 [2024-10-30 17:23:50.110232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:07.364 [2024-10-30 17:23:50.110247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.842 ms 00:20:07.364 [2024-10-30 17:23:50.110255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.364 [2024-10-30 17:23:50.135805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.364 [2024-10-30 17:23:50.135861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:07.364 [2024-10-30 17:23:50.135873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.503 ms 00:20:07.364 [2024-10-30 17:23:50.135881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.364 [2024-10-30 17:23:50.160714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.364 [2024-10-30 17:23:50.160763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:07.364 [2024-10-30 17:23:50.160775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.788 ms 00:20:07.364 [2024-10-30 17:23:50.160782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.364 [2024-10-30 17:23:50.185261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.364 [2024-10-30 17:23:50.185307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:07.364 [2024-10-30 17:23:50.185318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.395 ms 00:20:07.364 [2024-10-30 17:23:50.185326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.364 [2024-10-30 17:23:50.185369] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:07.364 [2024-10-30 17:23:50.185385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 91904 / 261120 wr_cnt: 1 state: open 00:20:07.364 [2024-10-30 17:23:50.185396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 
state: free 00:20:07.364 [2024-10-30 17:23:50.185413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 
261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:07.364 [2024-10-30 17:23:50.185651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.185995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186010] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:07.365 [2024-10-30 17:23:50.186215] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:07.365 [2024-10-30 17:23:50.186225] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6680e01b-326a-4063-9fcb-aad95718bcea 00:20:07.365 [2024-10-30 17:23:50.186234] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 91904 00:20:07.365 [2024-10-30 17:23:50.186242] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 92864 00:20:07.365 [2024-10-30 17:23:50.186249] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 91904 00:20:07.365 [2024-10-30 17:23:50.186259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:20:07.365 [2024-10-30 17:23:50.186267] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:07.365 [2024-10-30 17:23:50.186277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:07.365 [2024-10-30 17:23:50.186298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:07.365 [2024-10-30 17:23:50.186305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:07.365 [2024-10-30 17:23:50.186312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:07.365 [2024-10-30 17:23:50.186320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.365 [2024-10-30 17:23:50.186329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:07.365 [2024-10-30 17:23:50.186338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:20:07.365 [2024-10-30 17:23:50.186346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.365 [2024-10-30 17:23:50.200034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.365 [2024-10-30 17:23:50.200078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:07.365 [2024-10-30 17:23:50.200091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.668 ms 00:20:07.365 [2024-10-30 17:23:50.200099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.365 [2024-10-30 17:23:50.200535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.365 [2024-10-30 17:23:50.200554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:07.365 [2024-10-30 17:23:50.200564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:20:07.365 [2024-10-30 17:23:50.200572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.365 [2024-10-30 17:23:50.236664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.365 [2024-10-30 17:23:50.236711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.365 [2024-10-30 17:23:50.236729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.365 [2024-10-30 17:23:50.236738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.365 [2024-10-30 17:23:50.236810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.365 [2024-10-30 17:23:50.236818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.365 [2024-10-30 17:23:50.236827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.365 [2024-10-30 17:23:50.236834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.366 [2024-10-30 17:23:50.236914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.366 [2024-10-30 17:23:50.236926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.366 [2024-10-30 17:23:50.236934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:07.366 [2024-10-30 17:23:50.236947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.366 [2024-10-30 17:23:50.236963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.366 [2024-10-30 17:23:50.236971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.366 [2024-10-30 17:23:50.236979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.366 [2024-10-30 17:23:50.236987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.366 [2024-10-30 17:23:50.319762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.366 [2024-10-30 17:23:50.319818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.366 [2024-10-30 17:23:50.319832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.366 [2024-10-30 17:23:50.319847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.626 [2024-10-30 17:23:50.389243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.626 [2024-10-30 17:23:50.389291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.626 [2024-10-30 17:23:50.389303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.626 [2024-10-30 17:23:50.389312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.626 [2024-10-30 17:23:50.389371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.626 [2024-10-30 17:23:50.389381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.626 [2024-10-30 17:23:50.389390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.626 [2024-10-30 17:23:50.389399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.626 [2024-10-30 17:23:50.389462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.626 [2024-10-30 17:23:50.389472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.626 [2024-10-30 17:23:50.389482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.626 [2024-10-30 17:23:50.389490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.626 [2024-10-30 17:23:50.389583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.626 [2024-10-30 17:23:50.389594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.626 [2024-10-30 17:23:50.389603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.627 [2024-10-30 17:23:50.389612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.627 [2024-10-30 17:23:50.389641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.627 [2024-10-30 17:23:50.389654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:07.627 [2024-10-30 17:23:50.389663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.627 [2024-10-30 17:23:50.389671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.627 [2024-10-30 17:23:50.389709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.627 [2024-10-30 17:23:50.389719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:07.627 
[2024-10-30 17:23:50.389728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.627 [2024-10-30 17:23:50.389735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.627 [2024-10-30 17:23:50.389789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.627 [2024-10-30 17:23:50.389799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:07.627 [2024-10-30 17:23:50.389808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.627 [2024-10-30 17:23:50.389816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.627 [2024-10-30 17:23:50.389974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 572.475 ms, result 0 00:20:08.569 00:20:08.569 00:20:08.569 17:23:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:08.830 [2024-10-30 17:23:51.609330] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:20:08.830 [2024-10-30 17:23:51.609485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76157 ] 00:20:08.830 [2024-10-30 17:23:51.772602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.091 [2024-10-30 17:23:51.892444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.352 [2024-10-30 17:23:52.180773] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:09.352 [2024-10-30 17:23:52.180852] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:09.615 [2024-10-30 17:23:52.341793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.341877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:09.615 [2024-10-30 17:23:52.341897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:09.615 [2024-10-30 17:23:52.341907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.341963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.341974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:09.615 [2024-10-30 17:23:52.341985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:09.615 [2024-10-30 17:23:52.341993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.342015] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:09.615 [2024-10-30 17:23:52.342831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:09.615 [2024-10-30 17:23:52.342862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.342870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:09.615 [2024-10-30 17:23:52.342879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 
00:20:09.615 [2024-10-30 17:23:52.342887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.344574] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:09.615 [2024-10-30 17:23:52.358433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.358487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:09.615 [2024-10-30 17:23:52.358500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.861 ms 00:20:09.615 [2024-10-30 17:23:52.358507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.358583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.358597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:09.615 [2024-10-30 17:23:52.358606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:09.615 [2024-10-30 17:23:52.358614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.366529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.366570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:09.615 [2024-10-30 17:23:52.366581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.839 ms 00:20:09.615 [2024-10-30 17:23:52.366588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.366669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.366679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:09.615 [2024-10-30 17:23:52.366686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:09.615 [2024-10-30 17:23:52.366694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.366737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.366747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:09.615 [2024-10-30 17:23:52.366756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:09.615 [2024-10-30 17:23:52.366764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.366787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:09.615 [2024-10-30 17:23:52.370759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.370796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:09.615 [2024-10-30 17:23:52.370806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.977 ms 00:20:09.615 [2024-10-30 17:23:52.370816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.370851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.370860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:09.615 [2024-10-30 17:23:52.370868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:09.615 [2024-10-30 17:23:52.370876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 
[2024-10-30 17:23:52.370927] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:09.615 [2024-10-30 17:23:52.370949] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:09.615 [2024-10-30 17:23:52.370986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:09.615 [2024-10-30 17:23:52.371005] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:09.615 [2024-10-30 17:23:52.371112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:09.615 [2024-10-30 17:23:52.371123] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:09.615 [2024-10-30 17:23:52.371134] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:09.615 [2024-10-30 17:23:52.371145] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:09.615 [2024-10-30 17:23:52.371154] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:09.615 [2024-10-30 17:23:52.371162] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:09.615 [2024-10-30 17:23:52.371170] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:09.615 [2024-10-30 17:23:52.371180] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:09.615 [2024-10-30 17:23:52.371187] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:09.615 [2024-10-30 17:23:52.371214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.371223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:09.615 [2024-10-30 17:23:52.371231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:09.615 [2024-10-30 17:23:52.371238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.371321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.615 [2024-10-30 17:23:52.371330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:09.615 [2024-10-30 17:23:52.371338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:09.615 [2024-10-30 17:23:52.371345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.615 [2024-10-30 17:23:52.371448] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:09.615 [2024-10-30 17:23:52.371469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:09.615 [2024-10-30 17:23:52.371478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:09.615 [2024-10-30 17:23:52.371486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.615 [2024-10-30 17:23:52.371494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:09.615 [2024-10-30 17:23:52.371500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:09.615 [2024-10-30 17:23:52.371507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:09.615 [2024-10-30 17:23:52.371514] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:20:09.615 [2024-10-30 17:23:52.371520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:09.615 [2024-10-30 17:23:52.371527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:09.615 [2024-10-30 17:23:52.371535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:09.616 [2024-10-30 17:23:52.371542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:09.616 [2024-10-30 17:23:52.371549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:09.616 [2024-10-30 17:23:52.371556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:09.616 [2024-10-30 17:23:52.371563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:09.616 [2024-10-30 17:23:52.371577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:09.616 [2024-10-30 17:23:52.371590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:09.616 [2024-10-30 17:23:52.371597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:09.616 [2024-10-30 17:23:52.371610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.616 [2024-10-30 17:23:52.371623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:09.616 [2024-10-30 17:23:52.371630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.616 [2024-10-30 17:23:52.371643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:09.616 [2024-10-30 17:23:52.371650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.616 [2024-10-30 17:23:52.371662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:09.616 [2024-10-30 17:23:52.371668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.616 [2024-10-30 17:23:52.371680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:09.616 [2024-10-30 17:23:52.371687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:09.616 [2024-10-30 17:23:52.371700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:09.616 [2024-10-30 17:23:52.371706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:09.616 [2024-10-30 17:23:52.371712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:09.616 [2024-10-30 17:23:52.371720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:09.616 [2024-10-30 17:23:52.371727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:09.616 [2024-10-30 
17:23:52.371733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:09.616 [2024-10-30 17:23:52.371749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:09.616 [2024-10-30 17:23:52.371757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371764] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:09.616 [2024-10-30 17:23:52.371772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:09.616 [2024-10-30 17:23:52.371779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:09.616 [2024-10-30 17:23:52.371787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.616 [2024-10-30 17:23:52.371794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:09.616 [2024-10-30 17:23:52.371801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:09.616 [2024-10-30 17:23:52.371808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:09.616 [2024-10-30 17:23:52.371815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:09.616 [2024-10-30 17:23:52.371822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:09.616 [2024-10-30 17:23:52.371829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:09.616 [2024-10-30 17:23:52.371837] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:09.616 [2024-10-30 17:23:52.371846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:09.616 [2024-10-30 17:23:52.371855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:09.616 [2024-10-30 17:23:52.371862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:09.616 [2024-10-30 17:23:52.371869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:09.616 [2024-10-30 17:23:52.371876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:09.616 [2024-10-30 17:23:52.371883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:09.616 [2024-10-30 17:23:52.371890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:09.616 [2024-10-30 17:23:52.371896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:09.616 [2024-10-30 17:23:52.371903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:09.616 [2024-10-30 17:23:52.371910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:09.616 [2024-10-30 17:23:52.371917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:09.616 [2024-10-30 17:23:52.371924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:09.616 [2024-10-30 17:23:52.371930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:09.616 [2024-10-30 17:23:52.371937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:09.616 [2024-10-30 17:23:52.371944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:09.616 [2024-10-30 17:23:52.371952] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:09.616 [2024-10-30 17:23:52.371960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:09.616 [2024-10-30 17:23:52.371970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:09.616 [2024-10-30 17:23:52.371976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:09.616 [2024-10-30 17:23:52.371983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:09.616 [2024-10-30 17:23:52.371995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:09.616 [2024-10-30 17:23:52.372003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.372010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:09.616 [2024-10-30 17:23:52.372018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:20:09.616 [2024-10-30 17:23:52.372026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.403479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.403529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:09.616 [2024-10-30 17:23:52.403540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.409 ms 00:20:09.616 [2024-10-30 17:23:52.403548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.403640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.403654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:09.616 [2024-10-30 17:23:52.403664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:09.616 [2024-10-30 17:23:52.403672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.447939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.447994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:09.616 [2024-10-30 17:23:52.448007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.210 ms 00:20:09.616 [2024-10-30 17:23:52.448015] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.448063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.448073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.616 [2024-10-30 17:23:52.448082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:09.616 [2024-10-30 17:23:52.448094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.448697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.448733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:09.616 [2024-10-30 17:23:52.448743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:20:09.616 [2024-10-30 17:23:52.448751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.448900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.448910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:09.616 [2024-10-30 17:23:52.448919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:20:09.616 [2024-10-30 17:23:52.448926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.464402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.464448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.616 [2024-10-30 17:23:52.464459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.451 ms 00:20:09.616 [2024-10-30 17:23:52.464470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.478670] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:09.616 [2024-10-30 17:23:52.478716] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:09.616 [2024-10-30 17:23:52.478729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.478738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:09.616 [2024-10-30 17:23:52.478747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.153 ms 00:20:09.616 [2024-10-30 17:23:52.478754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.505140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.616 [2024-10-30 17:23:52.505207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:09.616 [2024-10-30 17:23:52.505219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.334 ms 00:20:09.616 [2024-10-30 17:23:52.505227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.616 [2024-10-30 17:23:52.518272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.617 [2024-10-30 17:23:52.518327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:09.617 [2024-10-30 17:23:52.518338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.009 ms 00:20:09.617 [2024-10-30 17:23:52.518345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:09.617 [2024-10-30 17:23:52.530901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.617 [2024-10-30 17:23:52.530949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:09.617 [2024-10-30 17:23:52.530960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.511 ms 00:20:09.617 [2024-10-30 17:23:52.530967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.617 [2024-10-30 17:23:52.531615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.617 [2024-10-30 17:23:52.531645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:09.617 [2024-10-30 17:23:52.531655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:20:09.617 [2024-10-30 17:23:52.531663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.598590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.879 [2024-10-30 17:23:52.598648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:09.879 [2024-10-30 17:23:52.598663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.904 ms 00:20:09.879 [2024-10-30 17:23:52.598678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.609722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:09.879 [2024-10-30 17:23:52.612862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.879 [2024-10-30 17:23:52.612904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:09.879 [2024-10-30 17:23:52.612915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.131 ms 00:20:09.879 [2024-10-30 17:23:52.612923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.613011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.879 [2024-10-30 17:23:52.613023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:09.879 [2024-10-30 17:23:52.613032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:09.879 [2024-10-30 17:23:52.613040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.614630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.879 [2024-10-30 17:23:52.614672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:09.879 [2024-10-30 17:23:52.614683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:20:09.879 [2024-10-30 17:23:52.614692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.614720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.879 [2024-10-30 17:23:52.614729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:09.879 [2024-10-30 17:23:52.614737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:09.879 [2024-10-30 17:23:52.614745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.614787] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:09.879 [2024-10-30 17:23:52.614800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.879 
[2024-10-30 17:23:52.614809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:09.879 [2024-10-30 17:23:52.614817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:09.879 [2024-10-30 17:23:52.614826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.640844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.879 [2024-10-30 17:23:52.640891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:09.879 [2024-10-30 17:23:52.640904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.999 ms 00:20:09.879 [2024-10-30 17:23:52.640913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.641005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.879 [2024-10-30 17:23:52.641015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:09.879 [2024-10-30 17:23:52.641024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:09.879 [2024-10-30 17:23:52.641032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.879 [2024-10-30 17:23:52.642452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.151 ms, result 0 00:20:11.266  [2024-10-30T17:23:55.191Z] Copying: 10/1024 [MB] (10 MBps) [2024-10-30T17:23:56.136Z] Copying: 28/1024 [MB] (18 MBps) [2024-10-30T17:23:57.078Z] Copying: 48/1024 [MB] (19 MBps) [2024-10-30T17:23:58.023Z] Copying: 71/1024 [MB] (23 MBps) [2024-10-30T17:23:58.966Z] Copying: 90/1024 [MB] (19 MBps) [2024-10-30T17:23:59.910Z] Copying: 112/1024 [MB] (22 MBps) [2024-10-30T17:24:00.939Z] Copying: 128/1024 [MB] (15 MBps) [2024-10-30T17:24:01.930Z] Copying: 143/1024 [MB] (15 MBps) [2024-10-30T17:24:02.875Z] Copying: 163/1024 [MB] (20 MBps) [2024-10-30T17:24:04.263Z] Copying: 184/1024 [MB] (20 MBps) [2024-10-30T17:24:04.837Z] Copying: 198/1024 [MB] (13 MBps) [2024-10-30T17:24:06.220Z] Copying: 217/1024 [MB] (19 MBps) [2024-10-30T17:24:07.156Z] Copying: 228/1024 [MB] (10 MBps) [2024-10-30T17:24:08.128Z] Copying: 240/1024 [MB] (12 MBps) [2024-10-30T17:24:09.067Z] Copying: 251/1024 [MB] (10 MBps) [2024-10-30T17:24:10.010Z] Copying: 261/1024 [MB] (10 MBps) [2024-10-30T17:24:10.953Z] Copying: 276/1024 [MB] (14 MBps) [2024-10-30T17:24:11.895Z] Copying: 292/1024 [MB] (16 MBps) [2024-10-30T17:24:12.839Z] Copying: 309/1024 [MB] (17 MBps) [2024-10-30T17:24:14.227Z] Copying: 330/1024 [MB] (21 MBps) [2024-10-30T17:24:15.173Z] Copying: 347/1024 [MB] (16 MBps) [2024-10-30T17:24:16.118Z] Copying: 366/1024 [MB] (19 MBps) [2024-10-30T17:24:17.059Z] Copying: 389/1024 [MB] (22 MBps) [2024-10-30T17:24:18.002Z] Copying: 406/1024 [MB] (17 MBps) [2024-10-30T17:24:18.947Z] Copying: 427/1024 [MB] (20 MBps) [2024-10-30T17:24:19.890Z] Copying: 439/1024 [MB] (12 MBps) [2024-10-30T17:24:20.832Z] Copying: 462/1024 [MB] (22 MBps) [2024-10-30T17:24:22.217Z] Copying: 480/1024 [MB] (18 MBps) [2024-10-30T17:24:23.163Z] Copying: 491/1024 [MB] (10 MBps) [2024-10-30T17:24:24.106Z] Copying: 504/1024 [MB] (12 MBps) [2024-10-30T17:24:25.050Z] Copying: 526/1024 [MB] (22 MBps) [2024-10-30T17:24:25.996Z] Copying: 544/1024 [MB] (17 MBps) [2024-10-30T17:24:26.938Z] Copying: 568/1024 [MB] (23 MBps) [2024-10-30T17:24:27.880Z] Copying: 587/1024 [MB] (19 MBps) [2024-10-30T17:24:29.266Z] Copying: 608/1024 [MB] (21 MBps) [2024-10-30T17:24:29.840Z] 
Copying: 631/1024 [MB] (22 MBps) [2024-10-30T17:24:31.227Z] Copying: 651/1024 [MB] (19 MBps) [2024-10-30T17:24:32.174Z] Copying: 677/1024 [MB] (26 MBps) [2024-10-30T17:24:33.143Z] Copying: 689/1024 [MB] (12 MBps) [2024-10-30T17:24:33.834Z] Copying: 713/1024 [MB] (23 MBps) [2024-10-30T17:24:35.222Z] Copying: 732/1024 [MB] (19 MBps) [2024-10-30T17:24:36.197Z] Copying: 759/1024 [MB] (27 MBps) [2024-10-30T17:24:37.142Z] Copying: 775/1024 [MB] (15 MBps) [2024-10-30T17:24:38.086Z] Copying: 786/1024 [MB] (10 MBps) [2024-10-30T17:24:39.032Z] Copying: 802/1024 [MB] (15 MBps) [2024-10-30T17:24:39.976Z] Copying: 814/1024 [MB] (12 MBps) [2024-10-30T17:24:40.919Z] Copying: 832/1024 [MB] (18 MBps) [2024-10-30T17:24:41.864Z] Copying: 855/1024 [MB] (22 MBps) [2024-10-30T17:24:43.252Z] Copying: 872/1024 [MB] (16 MBps) [2024-10-30T17:24:44.195Z] Copying: 892/1024 [MB] (20 MBps) [2024-10-30T17:24:45.137Z] Copying: 915/1024 [MB] (23 MBps) [2024-10-30T17:24:46.083Z] Copying: 937/1024 [MB] (21 MBps) [2024-10-30T17:24:47.028Z] Copying: 955/1024 [MB] (17 MBps) [2024-10-30T17:24:47.974Z] Copying: 967/1024 [MB] (12 MBps) [2024-10-30T17:24:48.918Z] Copying: 989/1024 [MB] (21 MBps) [2024-10-30T17:24:49.862Z] Copying: 1006/1024 [MB] (17 MBps) [2024-10-30T17:24:49.862Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-10-30 17:24:49.692469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.881 [2024-10-30 17:24:49.692598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:06.881 [2024-10-30 17:24:49.692633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:06.881 [2024-10-30 17:24:49.692656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.881 [2024-10-30 17:24:49.692736] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:06.881 [2024-10-30 17:24:49.700016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.881 [2024-10-30 17:24:49.700092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:06.881 [2024-10-30 17:24:49.700117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.241 ms 00:21:06.881 [2024-10-30 17:24:49.700138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.881 [2024-10-30 17:24:49.700744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.881 [2024-10-30 17:24:49.700796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:06.881 [2024-10-30 17:24:49.700820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:21:06.881 [2024-10-30 17:24:49.700841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.881 [2024-10-30 17:24:49.708615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.881 [2024-10-30 17:24:49.708673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:06.881 [2024-10-30 17:24:49.708685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.737 ms 00:21:06.881 [2024-10-30 17:24:49.708693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.881 [2024-10-30 17:24:49.715047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.881 [2024-10-30 17:24:49.715093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:06.881 [2024-10-30 17:24:49.715106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 6.307 ms 00:21:06.881 [2024-10-30 17:24:49.715116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.881 [2024-10-30 17:24:49.742029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.881 [2024-10-30 17:24:49.742075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:06.881 [2024-10-30 17:24:49.742089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.859 ms 00:21:06.881 [2024-10-30 17:24:49.742098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.881 [2024-10-30 17:24:49.758797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.881 [2024-10-30 17:24:49.758841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:06.881 [2024-10-30 17:24:49.758862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.652 ms 00:21:06.881 [2024-10-30 17:24:49.758871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.142 [2024-10-30 17:24:49.957820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.142 [2024-10-30 17:24:49.957892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:07.142 [2024-10-30 17:24:49.957906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 198.896 ms 00:21:07.142 [2024-10-30 17:24:49.957914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.142 [2024-10-30 17:24:49.984922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.142 [2024-10-30 17:24:49.984964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:07.142 [2024-10-30 17:24:49.984976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.992 ms 00:21:07.142 [2024-10-30 17:24:49.984983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.142 [2024-10-30 17:24:50.010190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.142 [2024-10-30 17:24:50.010255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:07.142 [2024-10-30 17:24:50.010279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.163 ms 00:21:07.142 [2024-10-30 17:24:50.010287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.142 [2024-10-30 17:24:50.035062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.142 [2024-10-30 17:24:50.035107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:07.142 [2024-10-30 17:24:50.035119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.729 ms 00:21:07.142 [2024-10-30 17:24:50.035127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.142 [2024-10-30 17:24:50.059712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.142 [2024-10-30 17:24:50.059760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:07.142 [2024-10-30 17:24:50.059773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.512 ms 00:21:07.142 [2024-10-30 17:24:50.059782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.142 [2024-10-30 17:24:50.059827] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:07.142 [2024-10-30 17:24:50.059844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 
wr_cnt: 1 state: open 00:21:07.142 [2024-10-30 17:24:50.059856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.059998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.060005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.060015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.060024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.060032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.060040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.060048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.060056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
26: 0 / 261120 wr_cnt: 0 state: free 00:21:07.142 [2024-10-30 17:24:50.060063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060274] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060491] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 17:24:50.060679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:07.143 [2024-10-30 
17:24:50.060695] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:07.143 [2024-10-30 17:24:50.060703] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6680e01b-326a-4063-9fcb-aad95718bcea 00:21:07.143 [2024-10-30 17:24:50.060712] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:07.143 [2024-10-30 17:24:50.060720] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 40128 00:21:07.143 [2024-10-30 17:24:50.060727] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 39168 00:21:07.143 [2024-10-30 17:24:50.060735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0245 00:21:07.143 [2024-10-30 17:24:50.060743] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:07.143 [2024-10-30 17:24:50.060750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:07.143 [2024-10-30 17:24:50.060763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:07.143 [2024-10-30 17:24:50.060780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:07.143 [2024-10-30 17:24:50.060787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:07.143 [2024-10-30 17:24:50.060795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.143 [2024-10-30 17:24:50.060802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:07.143 [2024-10-30 17:24:50.060811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:21:07.143 [2024-10-30 17:24:50.060820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.143 [2024-10-30 17:24:50.074582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.143 [2024-10-30 17:24:50.074625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:07.143 [2024-10-30 17:24:50.074636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.742 ms 00:21:07.143 [2024-10-30 17:24:50.074645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.143 [2024-10-30 17:24:50.075057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.143 [2024-10-30 17:24:50.075079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:07.143 [2024-10-30 17:24:50.075089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:21:07.143 [2024-10-30 17:24:50.075096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.143 [2024-10-30 17:24:50.111643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.143 [2024-10-30 17:24:50.111694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.143 [2024-10-30 17:24:50.111709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.143 [2024-10-30 17:24:50.111718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.144 [2024-10-30 17:24:50.111788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.144 [2024-10-30 17:24:50.111798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.144 [2024-10-30 17:24:50.111807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.144 [2024-10-30 17:24:50.111815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.144 [2024-10-30 
17:24:50.111902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.144 [2024-10-30 17:24:50.111914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.144 [2024-10-30 17:24:50.111923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.144 [2024-10-30 17:24:50.111935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.144 [2024-10-30 17:24:50.111952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.144 [2024-10-30 17:24:50.111960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.144 [2024-10-30 17:24:50.111968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.144 [2024-10-30 17:24:50.111976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.404 [2024-10-30 17:24:50.195820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.404 [2024-10-30 17:24:50.195878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.405 [2024-10-30 17:24:50.195891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.405 [2024-10-30 17:24:50.195906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.405 [2024-10-30 17:24:50.264331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.405 [2024-10-30 17:24:50.264391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.405 [2024-10-30 17:24:50.264404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.405 [2024-10-30 17:24:50.264414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.405 [2024-10-30 17:24:50.264477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.405 [2024-10-30 17:24:50.264487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.405 [2024-10-30 17:24:50.264496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.405 [2024-10-30 17:24:50.264505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.405 [2024-10-30 17:24:50.264571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.405 [2024-10-30 17:24:50.264581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.405 [2024-10-30 17:24:50.264591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.405 [2024-10-30 17:24:50.264599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.405 [2024-10-30 17:24:50.264697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.405 [2024-10-30 17:24:50.264709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.405 [2024-10-30 17:24:50.264718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.405 [2024-10-30 17:24:50.264726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.405 [2024-10-30 17:24:50.264758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.405 [2024-10-30 17:24:50.264771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:07.405 [2024-10-30 17:24:50.264780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.405 [2024-10-30 17:24:50.264788] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.405 [2024-10-30 17:24:50.264829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.405 [2024-10-30 17:24:50.264838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.405 [2024-10-30 17:24:50.264847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.405 [2024-10-30 17:24:50.264855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.405 [2024-10-30 17:24:50.264905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.405 [2024-10-30 17:24:50.264915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.405 [2024-10-30 17:24:50.264924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.405 [2024-10-30 17:24:50.264932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.405 [2024-10-30 17:24:50.265067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 573.175 ms, result 0 00:21:08.350 00:21:08.350 00:21:08.350 17:24:51 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:10.267 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:10.267 17:24:52 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:10.267 17:24:52 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:10.267 17:24:52 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74336 00:21:10.267 17:24:53 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74336 ']' 00:21:10.267 17:24:53 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74336 00:21:10.267 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74336) - No such process 00:21:10.267 17:24:53 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74336 is not found' 00:21:10.267 Process with pid 74336 is not found 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:10.267 Remove shared memory files 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:10.267 17:24:53 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:10.267 00:21:10.267 real 3m56.261s 00:21:10.267 user 3m44.818s 00:21:10.267 sys 0m11.862s 00:21:10.267 ************************************ 00:21:10.267 END TEST ftl_restore 00:21:10.267 ************************************ 00:21:10.267 17:24:53 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.267 17:24:53 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:10.267 17:24:53 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:10.267 17:24:53 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:10.267 17:24:53 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.267 17:24:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:10.267 ************************************ 00:21:10.267 START TEST ftl_dirty_shutdown 00:21:10.267 ************************************ 00:21:10.267 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:10.528 * Looking for test storage... 00:21:10.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.528 --rc genhtml_branch_coverage=1 00:21:10.528 --rc genhtml_function_coverage=1 00:21:10.528 --rc genhtml_legend=1 00:21:10.528 --rc geninfo_all_blocks=1 00:21:10.528 --rc geninfo_unexecuted_blocks=1 00:21:10.528 00:21:10.528 ' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.528 --rc genhtml_branch_coverage=1 00:21:10.528 --rc genhtml_function_coverage=1 00:21:10.528 --rc genhtml_legend=1 00:21:10.528 --rc geninfo_all_blocks=1 00:21:10.528 --rc geninfo_unexecuted_blocks=1 00:21:10.528 00:21:10.528 ' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.528 --rc genhtml_branch_coverage=1 00:21:10.528 --rc genhtml_function_coverage=1 00:21:10.528 --rc genhtml_legend=1 00:21:10.528 --rc geninfo_all_blocks=1 00:21:10.528 --rc geninfo_unexecuted_blocks=1 00:21:10.528 00:21:10.528 ' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.528 --rc genhtml_branch_coverage=1 00:21:10.528 --rc genhtml_function_coverage=1 00:21:10.528 --rc genhtml_legend=1 00:21:10.528 --rc geninfo_all_blocks=1 00:21:10.528 --rc geninfo_unexecuted_blocks=1 00:21:10.528 00:21:10.528 ' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:10.528 17:24:53 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76855 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76855 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 76855 ']' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.528 17:24:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:10.528 [2024-10-30 17:24:53.465081] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:21:10.528 [2024-10-30 17:24:53.465250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76855 ] 00:21:10.789 [2024-10-30 17:24:53.626475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.789 [2024-10-30 17:24:53.747066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.734 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:11.735 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:21:11.735 17:24:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:11.735 17:24:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:11.735 17:24:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:11.735 17:24:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:11.735 17:24:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:11.735 17:24:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:11.997 { 00:21:11.997 "name": "nvme0n1", 00:21:11.997 "aliases": [ 00:21:11.997 "1192e66d-ca92-46f6-8c98-faf443ad88d0" 00:21:11.997 ], 00:21:11.997 "product_name": "NVMe disk", 00:21:11.997 "block_size": 4096, 00:21:11.997 "num_blocks": 1310720, 00:21:11.997 "uuid": "1192e66d-ca92-46f6-8c98-faf443ad88d0", 00:21:11.997 "numa_id": -1, 00:21:11.997 "assigned_rate_limits": { 00:21:11.997 "rw_ios_per_sec": 0, 00:21:11.997 "rw_mbytes_per_sec": 0, 00:21:11.997 "r_mbytes_per_sec": 0, 00:21:11.997 "w_mbytes_per_sec": 0 00:21:11.997 }, 00:21:11.997 "claimed": true, 00:21:11.997 "claim_type": "read_many_write_one", 00:21:11.997 "zoned": false, 00:21:11.997 "supported_io_types": { 00:21:11.997 "read": true, 00:21:11.997 "write": true, 00:21:11.997 "unmap": true, 00:21:11.997 "flush": true, 00:21:11.997 "reset": true, 00:21:11.997 "nvme_admin": true, 00:21:11.997 "nvme_io": true, 00:21:11.997 "nvme_io_md": false, 00:21:11.997 "write_zeroes": true, 00:21:11.997 "zcopy": false, 00:21:11.997 "get_zone_info": false, 00:21:11.997 "zone_management": false, 00:21:11.997 "zone_append": false, 00:21:11.997 "compare": true, 00:21:11.997 "compare_and_write": false, 00:21:11.997 "abort": true, 00:21:11.997 "seek_hole": false, 00:21:11.997 "seek_data": false, 00:21:11.997 
"copy": true, 00:21:11.997 "nvme_iov_md": false 00:21:11.997 }, 00:21:11.997 "driver_specific": { 00:21:11.997 "nvme": [ 00:21:11.997 { 00:21:11.997 "pci_address": "0000:00:11.0", 00:21:11.997 "trid": { 00:21:11.997 "trtype": "PCIe", 00:21:11.997 "traddr": "0000:00:11.0" 00:21:11.997 }, 00:21:11.997 "ctrlr_data": { 00:21:11.997 "cntlid": 0, 00:21:11.997 "vendor_id": "0x1b36", 00:21:11.997 "model_number": "QEMU NVMe Ctrl", 00:21:11.997 "serial_number": "12341", 00:21:11.997 "firmware_revision": "8.0.0", 00:21:11.997 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:11.997 "oacs": { 00:21:11.997 "security": 0, 00:21:11.997 "format": 1, 00:21:11.997 "firmware": 0, 00:21:11.997 "ns_manage": 1 00:21:11.997 }, 00:21:11.997 "multi_ctrlr": false, 00:21:11.997 "ana_reporting": false 00:21:11.997 }, 00:21:11.997 "vs": { 00:21:11.997 "nvme_version": "1.4" 00:21:11.997 }, 00:21:11.997 "ns_data": { 00:21:11.997 "id": 1, 00:21:11.997 "can_share": false 00:21:11.997 } 00:21:11.997 } 00:21:11.997 ], 00:21:11.997 "mp_policy": "active_passive" 00:21:11.997 } 00:21:11.997 } 00:21:11.997 ]' 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:11.997 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:21:12.258 17:24:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=33f7c999-42ea-4296-87bd-66a0f6794f66 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:12.258 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33f7c999-42ea-4296-87bd-66a0f6794f66 00:21:12.519 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:12.781 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=fd1a2574-da88-4343-8696-180827f642a3 00:21:12.781 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fd1a2574-da88-4343-8696-180827f642a3 00:21:13.042 17:24:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.042 17:24:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:13.042 17:24:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.042 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:13.042 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:21:13.043 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.043 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:13.043 17:24:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.043 17:24:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.043 17:24:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:13.043 17:24:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:21:13.043 17:24:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:21:13.043 17:24:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.304 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:13.304 { 00:21:13.304 "name": "57aaeb10-79f6-4957-ba68-5679bcca7173", 00:21:13.304 "aliases": [ 00:21:13.304 "lvs/nvme0n1p0" 00:21:13.304 ], 00:21:13.304 "product_name": "Logical Volume", 00:21:13.304 "block_size": 4096, 00:21:13.304 "num_blocks": 26476544, 00:21:13.304 "uuid": "57aaeb10-79f6-4957-ba68-5679bcca7173", 00:21:13.304 "assigned_rate_limits": { 00:21:13.304 "rw_ios_per_sec": 0, 00:21:13.304 "rw_mbytes_per_sec": 0, 00:21:13.304 "r_mbytes_per_sec": 0, 00:21:13.304 "w_mbytes_per_sec": 0 00:21:13.304 }, 00:21:13.305 "claimed": false, 00:21:13.305 "zoned": false, 00:21:13.305 "supported_io_types": { 00:21:13.305 "read": true, 00:21:13.305 "write": true, 00:21:13.305 "unmap": true, 00:21:13.305 "flush": false, 00:21:13.305 "reset": true, 00:21:13.305 "nvme_admin": false, 00:21:13.305 "nvme_io": false, 00:21:13.305 "nvme_io_md": false, 00:21:13.305 "write_zeroes": true, 00:21:13.305 "zcopy": false, 00:21:13.305 "get_zone_info": false, 00:21:13.305 "zone_management": false, 00:21:13.305 "zone_append": false, 00:21:13.305 "compare": false, 00:21:13.305 "compare_and_write": false, 00:21:13.305 "abort": false, 00:21:13.305 "seek_hole": true, 00:21:13.305 "seek_data": true, 00:21:13.305 "copy": false, 00:21:13.305 "nvme_iov_md": false 00:21:13.305 }, 00:21:13.305 "driver_specific": { 00:21:13.305 "lvol": { 00:21:13.305 "lvol_store_uuid": "fd1a2574-da88-4343-8696-180827f642a3", 00:21:13.305 "base_bdev": "nvme0n1", 00:21:13.305 "thin_provision": true, 00:21:13.305 "num_allocated_clusters": 0, 00:21:13.305 "snapshot": false, 00:21:13.305 "clone": false, 00:21:13.305 "esnap_clone": false 00:21:13.305 } 00:21:13.305 } 00:21:13.305 } 00:21:13.305 ]' 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:13.305 17:24:56 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:13.566 17:24:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:13.566 17:24:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:13.566 17:24:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.566 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.566 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:13.566 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:21:13.566 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:21:13.566 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:13.828 { 00:21:13.828 "name": "57aaeb10-79f6-4957-ba68-5679bcca7173", 00:21:13.828 "aliases": [ 00:21:13.828 "lvs/nvme0n1p0" 00:21:13.828 ], 00:21:13.828 "product_name": "Logical Volume", 00:21:13.828 "block_size": 4096, 00:21:13.828 "num_blocks": 26476544, 00:21:13.828 "uuid": "57aaeb10-79f6-4957-ba68-5679bcca7173", 00:21:13.828 "assigned_rate_limits": { 00:21:13.828 "rw_ios_per_sec": 0, 00:21:13.828 "rw_mbytes_per_sec": 0, 00:21:13.828 "r_mbytes_per_sec": 0, 00:21:13.828 "w_mbytes_per_sec": 0 00:21:13.828 }, 00:21:13.828 "claimed": false, 00:21:13.828 "zoned": false, 00:21:13.828 "supported_io_types": { 00:21:13.828 "read": true, 00:21:13.828 "write": true, 00:21:13.828 "unmap": true, 00:21:13.828 "flush": false, 00:21:13.828 "reset": true, 00:21:13.828 "nvme_admin": false, 00:21:13.828 "nvme_io": false, 00:21:13.828 "nvme_io_md": false, 00:21:13.828 "write_zeroes": true, 00:21:13.828 "zcopy": false, 00:21:13.828 "get_zone_info": false, 00:21:13.828 "zone_management": false, 00:21:13.828 "zone_append": false, 00:21:13.828 "compare": false, 00:21:13.828 "compare_and_write": false, 00:21:13.828 "abort": false, 00:21:13.828 "seek_hole": true, 00:21:13.828 "seek_data": true, 00:21:13.828 "copy": false, 00:21:13.828 "nvme_iov_md": false 00:21:13.828 }, 00:21:13.828 "driver_specific": { 00:21:13.828 "lvol": { 00:21:13.828 "lvol_store_uuid": "fd1a2574-da88-4343-8696-180827f642a3", 00:21:13.828 "base_bdev": "nvme0n1", 00:21:13.828 "thin_provision": true, 00:21:13.828 "num_allocated_clusters": 0, 00:21:13.828 "snapshot": false, 00:21:13.828 "clone": false, 00:21:13.828 "esnap_clone": false 00:21:13.828 } 00:21:13.828 } 00:21:13.828 } 00:21:13.828 ]' 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:13.828 17:24:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:14.090 17:24:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:14.090 17:24:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:14.090 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:14.090 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:14.090 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:21:14.090 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:21:14.090 17:24:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57aaeb10-79f6-4957-ba68-5679bcca7173 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:14.352 { 00:21:14.352 "name": "57aaeb10-79f6-4957-ba68-5679bcca7173", 00:21:14.352 "aliases": [ 00:21:14.352 "lvs/nvme0n1p0" 00:21:14.352 ], 00:21:14.352 "product_name": "Logical Volume", 00:21:14.352 "block_size": 4096, 00:21:14.352 "num_blocks": 26476544, 00:21:14.352 "uuid": "57aaeb10-79f6-4957-ba68-5679bcca7173", 00:21:14.352 "assigned_rate_limits": { 00:21:14.352 "rw_ios_per_sec": 0, 00:21:14.352 "rw_mbytes_per_sec": 0, 00:21:14.352 "r_mbytes_per_sec": 0, 00:21:14.352 "w_mbytes_per_sec": 0 00:21:14.352 }, 00:21:14.352 "claimed": false, 00:21:14.352 "zoned": false, 00:21:14.352 "supported_io_types": { 00:21:14.352 "read": true, 00:21:14.352 "write": true, 00:21:14.352 "unmap": true, 00:21:14.352 "flush": false, 00:21:14.352 "reset": true, 00:21:14.352 "nvme_admin": false, 00:21:14.352 "nvme_io": false, 00:21:14.352 "nvme_io_md": false, 00:21:14.352 "write_zeroes": true, 00:21:14.352 "zcopy": false, 00:21:14.352 "get_zone_info": false, 00:21:14.352 "zone_management": false, 00:21:14.352 "zone_append": false, 00:21:14.352 "compare": false, 00:21:14.352 "compare_and_write": false, 00:21:14.352 "abort": false, 00:21:14.352 "seek_hole": true, 00:21:14.352 "seek_data": true, 00:21:14.352 "copy": false, 00:21:14.352 "nvme_iov_md": false 00:21:14.352 }, 00:21:14.352 "driver_specific": { 00:21:14.352 "lvol": { 00:21:14.352 "lvol_store_uuid": "fd1a2574-da88-4343-8696-180827f642a3", 00:21:14.352 "base_bdev": "nvme0n1", 00:21:14.352 "thin_provision": true, 00:21:14.352 "num_allocated_clusters": 0, 00:21:14.352 "snapshot": false, 00:21:14.352 "clone": false, 00:21:14.352 "esnap_clone": false 00:21:14.352 } 00:21:14.352 } 00:21:14.352 } 00:21:14.352 ]' 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 57aaeb10-79f6-4957-ba68-5679bcca7173 
--l2p_dram_limit 10' 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:14.352 17:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 57aaeb10-79f6-4957-ba68-5679bcca7173 --l2p_dram_limit 10 -c nvc0n1p0 00:21:14.612 [2024-10-30 17:24:57.425168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.612 [2024-10-30 17:24:57.425220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:14.612 [2024-10-30 17:24:57.425234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:14.612 [2024-10-30 17:24:57.425242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.612 [2024-10-30 17:24:57.425294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.612 [2024-10-30 17:24:57.425302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.612 [2024-10-30 17:24:57.425310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:14.612 [2024-10-30 17:24:57.425316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.612 [2024-10-30 17:24:57.425336] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:14.612 [2024-10-30 17:24:57.425939] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:14.612 [2024-10-30 17:24:57.425968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.612 [2024-10-30 17:24:57.425974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.612 [2024-10-30 17:24:57.425982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:21:14.612 [2024-10-30 17:24:57.425988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.612 [2024-10-30 17:24:57.426048] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 616d42d0-a4d9-43e9-b1e2-61151db08656 00:21:14.612 [2024-10-30 17:24:57.427050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.612 [2024-10-30 17:24:57.427082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:14.612 [2024-10-30 17:24:57.427090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:14.612 [2024-10-30 17:24:57.427099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.612 [2024-10-30 17:24:57.432025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.612 [2024-10-30 17:24:57.432058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:14.612 [2024-10-30 17:24:57.432066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.893 ms 00:21:14.612 [2024-10-30 17:24:57.432076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.612 [2024-10-30 17:24:57.432144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.612 [2024-10-30 17:24:57.432153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.612 [2024-10-30 17:24:57.432160] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:14.612 [2024-10-30 17:24:57.432169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.612 [2024-10-30 17:24:57.432217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.612 [2024-10-30 17:24:57.432228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:14.612 [2024-10-30 17:24:57.432234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:14.613 [2024-10-30 17:24:57.432241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.613 [2024-10-30 17:24:57.432260] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:14.613 [2024-10-30 17:24:57.435171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.613 [2024-10-30 17:24:57.435205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.613 [2024-10-30 17:24:57.435214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:21:14.613 [2024-10-30 17:24:57.435223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.613 [2024-10-30 17:24:57.435251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.613 [2024-10-30 17:24:57.435258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:14.613 [2024-10-30 17:24:57.435265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:14.613 [2024-10-30 17:24:57.435271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.613 [2024-10-30 17:24:57.435290] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:14.613 [2024-10-30 17:24:57.435397] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:14.613 [2024-10-30 17:24:57.435409] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:14.613 [2024-10-30 17:24:57.435418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:14.613 [2024-10-30 17:24:57.435429] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435436] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435443] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:14.613 [2024-10-30 17:24:57.435449] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:14.613 [2024-10-30 17:24:57.435457] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:14.613 [2024-10-30 17:24:57.435462] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:14.613 [2024-10-30 17:24:57.435471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.613 [2024-10-30 17:24:57.435477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:14.613 [2024-10-30 17:24:57.435485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:21:14.613 [2024-10-30 17:24:57.435496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.613 [2024-10-30 17:24:57.435562] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.613 [2024-10-30 17:24:57.435568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:14.613 [2024-10-30 17:24:57.435576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:14.613 [2024-10-30 17:24:57.435581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.613 [2024-10-30 17:24:57.435657] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:14.613 [2024-10-30 17:24:57.435666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:14.613 [2024-10-30 17:24:57.435673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:14.613 [2024-10-30 17:24:57.435692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:14.613 [2024-10-30 17:24:57.435711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:14.613 [2024-10-30 17:24:57.435722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:14.613 [2024-10-30 17:24:57.435727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:14.613 [2024-10-30 17:24:57.435733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:14.613 [2024-10-30 17:24:57.435738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:14.613 [2024-10-30 17:24:57.435745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:14.613 [2024-10-30 17:24:57.435749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:14.613 [2024-10-30 17:24:57.435763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:14.613 [2024-10-30 17:24:57.435781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:14.613 [2024-10-30 17:24:57.435796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:14.613 [2024-10-30 17:24:57.435813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435824] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:14.613 [2024-10-30 17:24:57.435829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:14.613 [2024-10-30 17:24:57.435848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:14.613 [2024-10-30 17:24:57.435859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:14.613 [2024-10-30 17:24:57.435864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:14.613 [2024-10-30 17:24:57.435871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:14.613 [2024-10-30 17:24:57.435876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:14.613 [2024-10-30 17:24:57.435883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:14.613 [2024-10-30 17:24:57.435887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:14.613 [2024-10-30 17:24:57.435898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:14.613 [2024-10-30 17:24:57.435905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435909] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:14.613 [2024-10-30 17:24:57.435917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:14.613 [2024-10-30 17:24:57.435922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.613 [2024-10-30 17:24:57.435935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:14.613 [2024-10-30 17:24:57.435943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:14.613 [2024-10-30 17:24:57.435948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:14.613 [2024-10-30 17:24:57.435955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:14.613 [2024-10-30 17:24:57.435960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:14.613 [2024-10-30 17:24:57.435966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:14.613 [2024-10-30 17:24:57.435973] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:14.613 [2024-10-30 17:24:57.435982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:14.613 [2024-10-30 17:24:57.435988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:14.613 [2024-10-30 17:24:57.435995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:14.613 [2024-10-30 17:24:57.436000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:14.613 [2024-10-30 17:24:57.436007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:14.613 [2024-10-30 17:24:57.436012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:14.613 [2024-10-30 17:24:57.436019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:14.613 [2024-10-30 17:24:57.436024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:14.613 [2024-10-30 17:24:57.436031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:14.613 [2024-10-30 17:24:57.436036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:14.613 [2024-10-30 17:24:57.436044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:14.613 [2024-10-30 17:24:57.436049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:14.613 [2024-10-30 17:24:57.436056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:14.613 [2024-10-30 17:24:57.436061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:14.613 [2024-10-30 17:24:57.436070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:14.613 [2024-10-30 17:24:57.436075] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:14.613 [2024-10-30 17:24:57.436083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:14.613 [2024-10-30 17:24:57.436091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:14.613 [2024-10-30 17:24:57.436098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:14.613 [2024-10-30 17:24:57.436104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:14.613 [2024-10-30 17:24:57.436110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:14.614 [2024-10-30 17:24:57.436116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.614 [2024-10-30 17:24:57.436123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:14.614 [2024-10-30 17:24:57.436129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:21:14.614 [2024-10-30 17:24:57.436135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.614 [2024-10-30 17:24:57.436176] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:14.614 [2024-10-30 17:24:57.436187] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:18.818 [2024-10-30 17:25:01.198772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.198855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:18.819 [2024-10-30 17:25:01.198873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3762.580 ms 00:21:18.819 [2024-10-30 17:25:01.198885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.230603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.230666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.819 [2024-10-30 17:25:01.230680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.474 ms 00:21:18.819 [2024-10-30 17:25:01.230692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.230829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.230842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:18.819 [2024-10-30 17:25:01.230852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:18.819 [2024-10-30 17:25:01.230866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.266021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.266069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.819 [2024-10-30 17:25:01.266081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.119 ms 00:21:18.819 [2024-10-30 17:25:01.266092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.266125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.266138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.819 [2024-10-30 17:25:01.266147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:18.819 [2024-10-30 17:25:01.266160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.266782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.266808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.819 [2024-10-30 17:25:01.266819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:21:18.819 [2024-10-30 17:25:01.266829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.266943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.266964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:18.819 [2024-10-30 17:25:01.266974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:18.819 [2024-10-30 17:25:01.266987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.284213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.284258] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.819 [2024-10-30 17:25:01.284270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.187 ms 00:21:18.819 [2024-10-30 17:25:01.284283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.297359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:18.819 [2024-10-30 17:25:01.301359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.301400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:18.819 [2024-10-30 17:25:01.301416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.977 ms 00:21:18.819 [2024-10-30 17:25:01.301424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.415559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.415618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:18.819 [2024-10-30 17:25:01.415638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.094 ms 00:21:18.819 [2024-10-30 17:25:01.415647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.415853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.415866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:18.819 [2024-10-30 17:25:01.415880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:21:18.819 [2024-10-30 17:25:01.415893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.442049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.442098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:18.819 [2024-10-30 17:25:01.442114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.097 ms 00:21:18.819 [2024-10-30 17:25:01.442123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.466334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.466362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:18.819 [2024-10-30 17:25:01.466375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.159 ms 00:21:18.819 [2024-10-30 17:25:01.466382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.466951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.466961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:18.819 [2024-10-30 17:25:01.466971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:21:18.819 [2024-10-30 17:25:01.466978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.541988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.542131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:18.819 [2024-10-30 17:25:01.542156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.976 ms 00:21:18.819 [2024-10-30 17:25:01.542164] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.567504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.567540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:18.819 [2024-10-30 17:25:01.567556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.253 ms 00:21:18.819 [2024-10-30 17:25:01.567563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.591132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.591167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:18.819 [2024-10-30 17:25:01.591179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.527 ms 00:21:18.819 [2024-10-30 17:25:01.591186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.615753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.615793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:18.819 [2024-10-30 17:25:01.615807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.511 ms 00:21:18.819 [2024-10-30 17:25:01.615813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.615861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.615870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:18.819 [2024-10-30 17:25:01.615885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:18.819 [2024-10-30 17:25:01.615892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.615974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.819 [2024-10-30 17:25:01.615985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:18.819 [2024-10-30 17:25:01.615994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:18.819 [2024-10-30 17:25:01.616002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.819 [2024-10-30 17:25:01.616993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4191.372 ms, result 0 00:21:18.819 { 00:21:18.819 "name": "ftl0", 00:21:18.819 "uuid": "616d42d0-a4d9-43e9-b1e2-61151db08656" 00:21:18.819 } 00:21:18.819 17:25:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:18.819 17:25:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:19.079 17:25:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:19.079 17:25:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:19.079 17:25:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:19.340 /dev/nbd0 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:19.340 1+0 records in 00:21:19.340 1+0 records out 00:21:19.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461944 s, 8.9 MB/s 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:21:19.340 17:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:19.340 [2024-10-30 17:25:02.180165] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:21:19.340 [2024-10-30 17:25:02.180734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77010 ] 00:21:19.601 [2024-10-30 17:25:02.336557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.601 [2024-10-30 17:25:02.457192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.986  [2024-10-30T17:25:04.985Z] Copying: 189/1024 [MB] (189 MBps) [2024-10-30T17:25:05.969Z] Copying: 423/1024 [MB] (233 MBps) [2024-10-30T17:25:06.908Z] Copying: 682/1024 [MB] (259 MBps) [2024-10-30T17:25:07.168Z] Copying: 944/1024 [MB] (261 MBps) [2024-10-30T17:25:07.740Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:21:24.759 00:21:24.759 17:25:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:26.673 17:25:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:26.673 [2024-10-30 17:25:09.649467] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
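The xtrace lines above are the write phase of the dirty-shutdown test: dirty_shutdown.sh fills a 1 GiB test file (262144 blocks of 4096 bytes) from /dev/urandom, records its md5sum for the later comparison, and then copies the file onto the FTL bdev through the /dev/nbd0 export; the SPDK/DPDK initialization output that follows belongs to that second spdk_dd run. A minimal sketch of the phase, reusing the paths and flags that appear in the log (SPDK_DIR and TESTFILE are shorthand introduced here, not variables from the script):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    TESTFILE=$SPDK_DIR/test/ftl/testfile

    # 262144 x 4096 B = 1 GiB of random data, checksummed for later verification
    "$SPDK_DIR/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom --of="$TESTFILE" \
        --bs=4096 --count=262144
    md5sum "$TESTFILE"

    # write the same data to ftl0 through the NBD export started above
    "$SPDK_DIR/build/bin/spdk_dd" -m 0x2 --if="$TESTFILE" --of=/dev/nbd0 \
        --bs=4096 --count=262144 --oflag=direct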
00:21:26.673 [2024-10-30 17:25:09.649584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77086 ] 00:21:26.934 [2024-10-30 17:25:09.806226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.934 [2024-10-30 17:25:09.903785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.319  [2024-10-30T17:25:12.242Z] Copying: 21/1024 [MB] (21 MBps) [2024-10-30T17:25:13.185Z] Copying: 44/1024 [MB] (22 MBps) [2024-10-30T17:25:14.128Z] Copying: 66/1024 [MB] (22 MBps) [2024-10-30T17:25:15.515Z] Copying: 88/1024 [MB] (21 MBps) [2024-10-30T17:25:16.455Z] Copying: 109/1024 [MB] (21 MBps) [2024-10-30T17:25:17.397Z] Copying: 131/1024 [MB] (22 MBps) [2024-10-30T17:25:18.343Z] Copying: 154/1024 [MB] (22 MBps) [2024-10-30T17:25:19.287Z] Copying: 173/1024 [MB] (19 MBps) [2024-10-30T17:25:20.232Z] Copying: 195/1024 [MB] (22 MBps) [2024-10-30T17:25:21.171Z] Copying: 215/1024 [MB] (19 MBps) [2024-10-30T17:25:22.557Z] Copying: 237/1024 [MB] (22 MBps) [2024-10-30T17:25:23.126Z] Copying: 260/1024 [MB] (23 MBps) [2024-10-30T17:25:24.514Z] Copying: 283/1024 [MB] (22 MBps) [2024-10-30T17:25:25.452Z] Copying: 306/1024 [MB] (23 MBps) [2024-10-30T17:25:26.391Z] Copying: 331/1024 [MB] (25 MBps) [2024-10-30T17:25:27.334Z] Copying: 360/1024 [MB] (28 MBps) [2024-10-30T17:25:28.270Z] Copying: 384/1024 [MB] (23 MBps) [2024-10-30T17:25:29.213Z] Copying: 411/1024 [MB] (27 MBps) [2024-10-30T17:25:30.150Z] Copying: 436/1024 [MB] (24 MBps) [2024-10-30T17:25:31.530Z] Copying: 462/1024 [MB] (26 MBps) [2024-10-30T17:25:32.474Z] Copying: 489/1024 [MB] (27 MBps) [2024-10-30T17:25:33.513Z] Copying: 512/1024 [MB] (22 MBps) [2024-10-30T17:25:34.471Z] Copying: 531/1024 [MB] (19 MBps) [2024-10-30T17:25:35.414Z] Copying: 552/1024 [MB] (20 MBps) [2024-10-30T17:25:36.363Z] Copying: 575/1024 [MB] (22 MBps) [2024-10-30T17:25:37.307Z] Copying: 596/1024 [MB] (21 MBps) [2024-10-30T17:25:38.248Z] Copying: 617/1024 [MB] (20 MBps) [2024-10-30T17:25:39.189Z] Copying: 635/1024 [MB] (18 MBps) [2024-10-30T17:25:40.128Z] Copying: 657/1024 [MB] (21 MBps) [2024-10-30T17:25:41.506Z] Copying: 680/1024 [MB] (23 MBps) [2024-10-30T17:25:42.440Z] Copying: 701/1024 [MB] (20 MBps) [2024-10-30T17:25:43.373Z] Copying: 724/1024 [MB] (22 MBps) [2024-10-30T17:25:44.307Z] Copying: 758/1024 [MB] (33 MBps) [2024-10-30T17:25:45.240Z] Copying: 791/1024 [MB] (33 MBps) [2024-10-30T17:25:46.178Z] Copying: 824/1024 [MB] (32 MBps) [2024-10-30T17:25:47.118Z] Copying: 851/1024 [MB] (27 MBps) [2024-10-30T17:25:48.503Z] Copying: 872/1024 [MB] (20 MBps) [2024-10-30T17:25:49.445Z] Copying: 892/1024 [MB] (19 MBps) [2024-10-30T17:25:50.379Z] Copying: 918/1024 [MB] (25 MBps) [2024-10-30T17:25:51.312Z] Copying: 947/1024 [MB] (28 MBps) [2024-10-30T17:25:52.251Z] Copying: 980/1024 [MB] (33 MBps) [2024-10-30T17:25:52.820Z] Copying: 1007/1024 [MB] (27 MBps) [2024-10-30T17:25:53.386Z] Copying: 1024/1024 [MB] (average 23 MBps) 00:22:10.405 00:22:10.662 17:25:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:10.663 17:25:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:10.663 17:25:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:10.921 [2024-10-30 
17:25:53.778797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.778846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.921 [2024-10-30 17:25:53.778858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:10.921 [2024-10-30 17:25:53.778866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.778884] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.921 [2024-10-30 17:25:53.781135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.781164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.921 [2024-10-30 17:25:53.781174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:22:10.921 [2024-10-30 17:25:53.781181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.783829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.783856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.921 [2024-10-30 17:25:53.783866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.625 ms 00:22:10.921 [2024-10-30 17:25:53.783872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.800291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.800319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.921 [2024-10-30 17:25:53.800330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.401 ms 00:22:10.921 [2024-10-30 17:25:53.800339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.805107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.805130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.921 [2024-10-30 17:25:53.805141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:22:10.921 [2024-10-30 17:25:53.805148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.824412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.824439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.921 [2024-10-30 17:25:53.824449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.170 ms 00:22:10.921 [2024-10-30 17:25:53.824456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.837753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.837781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:10.921 [2024-10-30 17:25:53.837791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.264 ms 00:22:10.921 [2024-10-30 17:25:53.837798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.837915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.837923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.921 [2024-10-30 17:25:53.837932] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:10.921 [2024-10-30 17:25:53.837938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.856683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.856709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:10.921 [2024-10-30 17:25:53.856718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.729 ms 00:22:10.921 [2024-10-30 17:25:53.856725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.874885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.874909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:10.921 [2024-10-30 17:25:53.874919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.129 ms 00:22:10.921 [2024-10-30 17:25:53.874925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.921 [2024-10-30 17:25:53.892415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.921 [2024-10-30 17:25:53.892439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:10.921 [2024-10-30 17:25:53.892448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.455 ms 00:22:10.921 [2024-10-30 17:25:53.892454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.183 [2024-10-30 17:25:53.910000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.183 [2024-10-30 17:25:53.910025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:11.183 [2024-10-30 17:25:53.910035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.486 ms 00:22:11.183 [2024-10-30 17:25:53.910041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.183 [2024-10-30 17:25:53.910069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:11.183 [2024-10-30 17:25:53.910080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 
[2024-10-30 17:25:53.910151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 
state: free 00:22:11.183 [2024-10-30 17:25:53.910332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 
0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:11.183 [2024-10-30 17:25:53.910533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:11.184 [2024-10-30 17:25:53.910785] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:11.184 [2024-10-30 17:25:53.910793] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 616d42d0-a4d9-43e9-b1e2-61151db08656 00:22:11.184 [2024-10-30 17:25:53.910800] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:11.184 [2024-10-30 17:25:53.910809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:11.184 [2024-10-30 17:25:53.910815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:11.184 [2024-10-30 17:25:53.910823] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:11.184 [2024-10-30 17:25:53.910829] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:11.184 [2024-10-30 17:25:53.910838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:11.184 [2024-10-30 17:25:53.910844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:11.184 [2024-10-30 17:25:53.910851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:11.184 [2024-10-30 17:25:53.910856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:11.184 [2024-10-30 17:25:53.910864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:11.184 [2024-10-30 17:25:53.910870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:11.184 [2024-10-30 17:25:53.910878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:22:11.184 [2024-10-30 17:25:53.910885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:53.920985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.184 [2024-10-30 17:25:53.921009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:11.184 [2024-10-30 17:25:53.921019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.047 ms 00:22:11.184 [2024-10-30 17:25:53.921027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:53.921331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.184 [2024-10-30 17:25:53.921340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:11.184 [2024-10-30 17:25:53.921348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:22:11.184 [2024-10-30 17:25:53.921354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:53.955991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:53.956017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.184 [2024-10-30 17:25:53.956028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:53.956037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:53.956089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:53.956097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.184 [2024-10-30 17:25:53.956104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:53.956110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:53.956170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:53.956179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.184 [2024-10-30 17:25:53.956188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:53.956194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:53.956231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:53.956239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.184 [2024-10-30 17:25:53.956247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:53.956253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:54.020397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:54.020430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.184 [2024-10-30 17:25:54.020440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:54.020448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 
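The entries above, together with the 'FTL shutdown ... result 0' message just below, record the orderly teardown that precedes the forced kill: the script flushes and detaches the NBD export, then unloads the FTL bdev, which persists the L2P, NV cache, valid map, band, trim and superblock metadata and puts the device back into the clean state before the per-module rollback steps run. A sketch of that teardown using the rpc.py calls shown in the log (rpc_py is shorthand for the scripts/rpc.py path used in this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    sync /dev/nbd0                      # flush outstanding writes on the NBD export
    "$rpc_py" nbd_stop_disk /dev/nbd0   # detach ftl0 from /dev/nbd0
    "$rpc_py" bdev_ftl_unload -b ftl0   # persist FTL metadata, mark the superblock clean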
[2024-10-30 17:25:54.072458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:54.072492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.184 [2024-10-30 17:25:54.072503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:54.072509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:54.072599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:54.072608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.184 [2024-10-30 17:25:54.072616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:54.072623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:54.072667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:54.072676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.184 [2024-10-30 17:25:54.072685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:54.072692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:54.072772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:54.072780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.184 [2024-10-30 17:25:54.072787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:54.072793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:54.072822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:54.072831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:11.184 [2024-10-30 17:25:54.072839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:54.072846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:54.072882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:54.072889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.184 [2024-10-30 17:25:54.072897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:54.072903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:54.072949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.184 [2024-10-30 17:25:54.072957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.184 [2024-10-30 17:25:54.072966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.184 [2024-10-30 17:25:54.072972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.184 [2024-10-30 17:25:54.073090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 294.258 ms, result 0 00:22:11.184 true 00:22:11.185 17:25:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76855 00:22:11.185 17:25:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f 
/dev/shm/spdk_tgt_trace.pid76855 00:22:11.185 17:25:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:11.444 [2024-10-30 17:25:54.164212] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:22:11.444 [2024-10-30 17:25:54.164333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77557 ] 00:22:11.444 [2024-10-30 17:25:54.319846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.444 [2024-10-30 17:25:54.413384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.819  [2024-10-30T17:25:56.735Z] Copying: 255/1024 [MB] (255 MBps) [2024-10-30T17:25:57.670Z] Copying: 513/1024 [MB] (257 MBps) [2024-10-30T17:25:59.043Z] Copying: 768/1024 [MB] (254 MBps) [2024-10-30T17:25:59.043Z] Copying: 1020/1024 [MB] (252 MBps) [2024-10-30T17:25:59.303Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:22:16.322 00:22:16.322 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76855 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:16.322 17:25:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:16.322 [2024-10-30 17:25:59.284307] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:22:16.322 [2024-10-30 17:25:59.284425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77615 ] 00:22:16.580 [2024-10-30 17:25:59.440530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.580 [2024-10-30 17:25:59.536168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.839 [2024-10-30 17:25:59.764679] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.839 [2024-10-30 17:25:59.764733] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:17.096 [2024-10-30 17:25:59.827908] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:17.096 [2024-10-30 17:25:59.828456] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:17.096 [2024-10-30 17:25:59.829039] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:17.355 [2024-10-30 17:26:00.261491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.261530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:17.355 [2024-10-30 17:26:00.261541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:17.355 [2024-10-30 17:26:00.261548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.261587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.261596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:17.355 
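With the original spdk_tgt killed (-9) and its /dev/shm trace file removed, the second half of the test drives the FTL bdev from spdk_dd alone: it prepares another 1 GiB test file, then re-attaches ftl0 using the bdev configuration written to ftl.json (presumably the save_subsystem_config output captured earlier in the run) and writes the new data at block offset 262144, i.e. directly behind the first gigabyte; the startup trace that follows shows the device coming back up and being marked dirty again. A sketch of those two steps with the paths and flags taken from the log (SPDK_DIR is shorthand introduced here):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # second batch of random data
    "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom \
        --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144

    # write it to ftl0 at block offset 262144, attaching the bdev from the saved JSON config
    "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$SPDK_DIR/test/ftl/config/ftl.json"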
[2024-10-30 17:26:00.261602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:17.355 [2024-10-30 17:26:00.261608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.261621] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:17.355 [2024-10-30 17:26:00.262192] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:17.355 [2024-10-30 17:26:00.262220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.262227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:17.355 [2024-10-30 17:26:00.262234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:22:17.355 [2024-10-30 17:26:00.262240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.263532] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:17.355 [2024-10-30 17:26:00.274082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.274114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:17.355 [2024-10-30 17:26:00.274122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.551 ms 00:22:17.355 [2024-10-30 17:26:00.274128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.274172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.274180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:17.355 [2024-10-30 17:26:00.274187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:17.355 [2024-10-30 17:26:00.274193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.280394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.280418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:17.355 [2024-10-30 17:26:00.280426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:22:17.355 [2024-10-30 17:26:00.280432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.280491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.280498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:17.355 [2024-10-30 17:26:00.280504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:17.355 [2024-10-30 17:26:00.280510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.280547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.280558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:17.355 [2024-10-30 17:26:00.280565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:17.355 [2024-10-30 17:26:00.280571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.280586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:17.355 [2024-10-30 17:26:00.283691] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.283715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:17.355 [2024-10-30 17:26:00.283723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:22:17.355 [2024-10-30 17:26:00.283729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.283757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.283764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:17.355 [2024-10-30 17:26:00.283770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:17.355 [2024-10-30 17:26:00.283776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.283790] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:17.355 [2024-10-30 17:26:00.283808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:17.355 [2024-10-30 17:26:00.283836] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:17.355 [2024-10-30 17:26:00.283849] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:17.355 [2024-10-30 17:26:00.283932] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:17.355 [2024-10-30 17:26:00.283942] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:17.355 [2024-10-30 17:26:00.283949] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:17.355 [2024-10-30 17:26:00.283958] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:17.355 [2024-10-30 17:26:00.283967] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:17.355 [2024-10-30 17:26:00.283974] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:17.355 [2024-10-30 17:26:00.283980] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:17.355 [2024-10-30 17:26:00.283986] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:17.355 [2024-10-30 17:26:00.283991] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:17.355 [2024-10-30 17:26:00.283997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.284003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:17.355 [2024-10-30 17:26:00.284009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:22:17.355 [2024-10-30 17:26:00.284014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.355 [2024-10-30 17:26:00.284079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.355 [2024-10-30 17:26:00.284085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:17.355 [2024-10-30 17:26:00.284094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:17.355 [2024-10-30 17:26:00.284099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:17.355 [2024-10-30 17:26:00.284176] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:17.355 [2024-10-30 17:26:00.284185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:17.355 [2024-10-30 17:26:00.284192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:17.355 [2024-10-30 17:26:00.284208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:17.355 [2024-10-30 17:26:00.284221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:17.355 [2024-10-30 17:26:00.284233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:17.355 [2024-10-30 17:26:00.284238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:17.355 [2024-10-30 17:26:00.284249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:17.355 [2024-10-30 17:26:00.284262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:17.355 [2024-10-30 17:26:00.284268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:17.355 [2024-10-30 17:26:00.284273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:17.355 [2024-10-30 17:26:00.284280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:17.355 [2024-10-30 17:26:00.284285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:17.355 [2024-10-30 17:26:00.284297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:17.355 [2024-10-30 17:26:00.284302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:17.355 [2024-10-30 17:26:00.284313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.355 [2024-10-30 17:26:00.284324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:17.355 [2024-10-30 17:26:00.284330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.355 [2024-10-30 17:26:00.284341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:17.355 [2024-10-30 17:26:00.284346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.355 [2024-10-30 17:26:00.284357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:17.355 [2024-10-30 17:26:00.284362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:17.355 [2024-10-30 17:26:00.284373] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md 00:22:17.355 [2024-10-30 17:26:00.284378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:17.355 [2024-10-30 17:26:00.284384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:17.355 [2024-10-30 17:26:00.284388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:17.355 [2024-10-30 17:26:00.284393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:17.355 [2024-10-30 17:26:00.284398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:17.355 [2024-10-30 17:26:00.284403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:17.356 [2024-10-30 17:26:00.284408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:17.356 [2024-10-30 17:26:00.284413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.356 [2024-10-30 17:26:00.284418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:17.356 [2024-10-30 17:26:00.284424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:17.356 [2024-10-30 17:26:00.284429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.356 [2024-10-30 17:26:00.284435] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:17.356 [2024-10-30 17:26:00.284441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:17.356 [2024-10-30 17:26:00.284447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:17.356 [2024-10-30 17:26:00.284454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:17.356 [2024-10-30 17:26:00.284460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:17.356 [2024-10-30 17:26:00.284465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:17.356 [2024-10-30 17:26:00.284471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:17.356 [2024-10-30 17:26:00.284476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:17.356 [2024-10-30 17:26:00.284481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:17.356 [2024-10-30 17:26:00.284486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:17.356 [2024-10-30 17:26:00.284493] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:17.356 [2024-10-30 17:26:00.284500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:17.356 [2024-10-30 17:26:00.284506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:17.356 [2024-10-30 17:26:00.284513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:17.356 [2024-10-30 17:26:00.284518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:17.356 [2024-10-30 17:26:00.284524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:17.356 [2024-10-30 17:26:00.284530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:17.356 [2024-10-30 17:26:00.284535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:17.356 [2024-10-30 17:26:00.284541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:17.356 [2024-10-30 17:26:00.284546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:17.356 [2024-10-30 17:26:00.284552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:17.356 [2024-10-30 17:26:00.284557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:17.356 [2024-10-30 17:26:00.284563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:17.356 [2024-10-30 17:26:00.284568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:17.356 [2024-10-30 17:26:00.284573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:17.356 [2024-10-30 17:26:00.284579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:17.356 [2024-10-30 17:26:00.284585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:17.356 [2024-10-30 17:26:00.284592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:17.356 [2024-10-30 17:26:00.284598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:17.356 [2024-10-30 17:26:00.284604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:17.356 [2024-10-30 17:26:00.284609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:17.356 [2024-10-30 17:26:00.284614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:17.356 [2024-10-30 17:26:00.284622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.356 [2024-10-30 17:26:00.284627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:17.356 [2024-10-30 17:26:00.284634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:22:17.356 [2024-10-30 17:26:00.284641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.356 [2024-10-30 17:26:00.309002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.356 [2024-10-30 17:26:00.309034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:17.356 [2024-10-30 17:26:00.309044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.318 ms 00:22:17.356 [2024-10-30 17:26:00.309050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
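The restarted instance logs the same Action / name / duration / status quadruples as the first bring-up, plus the full superblock region layout above. When comparing runs it can help to reduce that trace to the handful of slowest management steps; a small sketch of doing so with awk, assuming the un-wrapped console log (one trace_step entry per line) has been saved as ftl.log (a placeholder name):

    awk '{
        if (match($0, /name: .*/)) step = substr($0, RSTART + 6)
        if (match($0, /duration: [0-9.]+ ms/))
            printf "%10s ms  %s\n", substr($0, RSTART + 10, RLENGTH - 13), step
    }' ftl.log | sort -rn | head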
00:22:17.356 [2024-10-30 17:26:00.309118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.356 [2024-10-30 17:26:00.309127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:17.356 [2024-10-30 17:26:00.309133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:17.356 [2024-10-30 17:26:00.309140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.614 [2024-10-30 17:26:00.354909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.354947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.615 [2024-10-30 17:26:00.354957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.730 ms 00:22:17.615 [2024-10-30 17:26:00.354966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.355006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.355014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:17.615 [2024-10-30 17:26:00.355021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:17.615 [2024-10-30 17:26:00.355027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.355481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.355501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:17.615 [2024-10-30 17:26:00.355509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:22:17.615 [2024-10-30 17:26:00.355515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.355632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.355639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:17.615 [2024-10-30 17:26:00.355645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:17.615 [2024-10-30 17:26:00.355652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.367597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.367623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:17.615 [2024-10-30 17:26:00.367631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.928 ms 00:22:17.615 [2024-10-30 17:26:00.367638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.378494] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:17.615 [2024-10-30 17:26:00.378522] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:17.615 [2024-10-30 17:26:00.378533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.378541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:17.615 [2024-10-30 17:26:00.378550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.803 ms 00:22:17.615 [2024-10-30 17:26:00.378556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.397341] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.397371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:17.615 [2024-10-30 17:26:00.397388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.750 ms 00:22:17.615 [2024-10-30 17:26:00.397395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.406721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.406749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:17.615 [2024-10-30 17:26:00.406758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.288 ms 00:22:17.615 [2024-10-30 17:26:00.406763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.416007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.416033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:17.615 [2024-10-30 17:26:00.416041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.215 ms 00:22:17.615 [2024-10-30 17:26:00.416047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.416627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.416648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:17.615 [2024-10-30 17:26:00.416656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:22:17.615 [2024-10-30 17:26:00.416662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.464630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.464687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:17.615 [2024-10-30 17:26:00.464700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.952 ms 00:22:17.615 [2024-10-30 17:26:00.464707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.473003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:17.615 [2024-10-30 17:26:00.475555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.475581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:17.615 [2024-10-30 17:26:00.475592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.793 ms 00:22:17.615 [2024-10-30 17:26:00.475600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.475712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.475723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:17.615 [2024-10-30 17:26:00.475731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:17.615 [2024-10-30 17:26:00.475738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.475801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.475811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:17.615 [2024-10-30 17:26:00.475818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:22:17.615 [2024-10-30 17:26:00.475825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.475842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.475848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:17.615 [2024-10-30 17:26:00.475858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:17.615 [2024-10-30 17:26:00.475864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.475894] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:17.615 [2024-10-30 17:26:00.475903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.475910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:17.615 [2024-10-30 17:26:00.475917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:17.615 [2024-10-30 17:26:00.475922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.495146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.495183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:17.615 [2024-10-30 17:26:00.495193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.205 ms 00:22:17.615 [2024-10-30 17:26:00.495208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.495277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.615 [2024-10-30 17:26:00.495286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:17.615 [2024-10-30 17:26:00.495293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:17.615 [2024-10-30 17:26:00.495299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.615 [2024-10-30 17:26:00.496295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.393 ms, result 0 00:22:18.557  [2024-10-30T17:26:02.521Z] Copying: 30/1024 [MB] (30 MBps) [2024-10-30T17:26:03.890Z] Copying: 53/1024 [MB] (23 MBps) [2024-10-30T17:26:04.821Z] Copying: 73/1024 [MB] (19 MBps) [2024-10-30T17:26:05.753Z] Copying: 93/1024 [MB] (19 MBps) [2024-10-30T17:26:06.690Z] Copying: 112/1024 [MB] (19 MBps) [2024-10-30T17:26:07.625Z] Copying: 129/1024 [MB] (16 MBps) [2024-10-30T17:26:08.558Z] Copying: 144/1024 [MB] (15 MBps) [2024-10-30T17:26:09.938Z] Copying: 164/1024 [MB] (19 MBps) [2024-10-30T17:26:10.870Z] Copying: 181/1024 [MB] (17 MBps) [2024-10-30T17:26:11.804Z] Copying: 197/1024 [MB] (16 MBps) [2024-10-30T17:26:12.742Z] Copying: 216/1024 [MB] (19 MBps) [2024-10-30T17:26:13.680Z] Copying: 228/1024 [MB] (11 MBps) [2024-10-30T17:26:14.622Z] Copying: 244/1024 [MB] (16 MBps) [2024-10-30T17:26:15.559Z] Copying: 254/1024 [MB] (10 MBps) [2024-10-30T17:26:16.931Z] Copying: 270/1024 [MB] (15 MBps) [2024-10-30T17:26:17.864Z] Copying: 289/1024 [MB] (18 MBps) [2024-10-30T17:26:18.798Z] Copying: 308/1024 [MB] (19 MBps) [2024-10-30T17:26:19.751Z] Copying: 327/1024 [MB] (19 MBps) [2024-10-30T17:26:20.687Z] Copying: 346/1024 [MB] (19 MBps) [2024-10-30T17:26:21.622Z] Copying: 365/1024 [MB] (18 MBps) [2024-10-30T17:26:22.555Z] Copying: 383/1024 [MB] (18 MBps) [2024-10-30T17:26:23.929Z] 
Copying: 403/1024 [MB] (19 MBps) [2024-10-30T17:26:24.863Z] Copying: 422/1024 [MB] (19 MBps) [2024-10-30T17:26:25.798Z] Copying: 441/1024 [MB] (19 MBps) [2024-10-30T17:26:26.732Z] Copying: 460/1024 [MB] (19 MBps) [2024-10-30T17:26:27.666Z] Copying: 479/1024 [MB] (19 MBps) [2024-10-30T17:26:28.600Z] Copying: 498/1024 [MB] (18 MBps) [2024-10-30T17:26:29.534Z] Copying: 517/1024 [MB] (19 MBps) [2024-10-30T17:26:30.992Z] Copying: 536/1024 [MB] (18 MBps) [2024-10-30T17:26:31.559Z] Copying: 547/1024 [MB] (10 MBps) [2024-10-30T17:26:32.933Z] Copying: 561/1024 [MB] (14 MBps) [2024-10-30T17:26:33.871Z] Copying: 579/1024 [MB] (18 MBps) [2024-10-30T17:26:34.811Z] Copying: 598/1024 [MB] (18 MBps) [2024-10-30T17:26:35.743Z] Copying: 610/1024 [MB] (11 MBps) [2024-10-30T17:26:36.677Z] Copying: 629/1024 [MB] (18 MBps) [2024-10-30T17:26:37.610Z] Copying: 647/1024 [MB] (18 MBps) [2024-10-30T17:26:38.543Z] Copying: 665/1024 [MB] (17 MBps) [2024-10-30T17:26:39.916Z] Copying: 683/1024 [MB] (17 MBps) [2024-10-30T17:26:40.852Z] Copying: 701/1024 [MB] (18 MBps) [2024-10-30T17:26:41.795Z] Copying: 719/1024 [MB] (17 MBps) [2024-10-30T17:26:42.738Z] Copying: 731/1024 [MB] (12 MBps) [2024-10-30T17:26:43.676Z] Copying: 742/1024 [MB] (10 MBps) [2024-10-30T17:26:44.610Z] Copying: 755/1024 [MB] (12 MBps) [2024-10-30T17:26:45.544Z] Copying: 773/1024 [MB] (18 MBps) [2024-10-30T17:26:46.919Z] Copying: 791/1024 [MB] (17 MBps) [2024-10-30T17:26:47.860Z] Copying: 809/1024 [MB] (17 MBps) [2024-10-30T17:26:48.803Z] Copying: 823/1024 [MB] (14 MBps) [2024-10-30T17:26:49.744Z] Copying: 837/1024 [MB] (14 MBps) [2024-10-30T17:26:50.677Z] Copying: 847/1024 [MB] (10 MBps) [2024-10-30T17:26:51.618Z] Copying: 867/1024 [MB] (19 MBps) [2024-10-30T17:26:52.553Z] Copying: 877/1024 [MB] (10 MBps) [2024-10-30T17:26:53.927Z] Copying: 895/1024 [MB] (17 MBps) [2024-10-30T17:26:54.869Z] Copying: 918/1024 [MB] (23 MBps) [2024-10-30T17:26:55.805Z] Copying: 932/1024 [MB] (13 MBps) [2024-10-30T17:26:56.748Z] Copying: 945/1024 [MB] (13 MBps) [2024-10-30T17:26:57.689Z] Copying: 957/1024 [MB] (12 MBps) [2024-10-30T17:26:58.626Z] Copying: 968/1024 [MB] (10 MBps) [2024-10-30T17:26:59.609Z] Copying: 980/1024 [MB] (12 MBps) [2024-10-30T17:27:00.551Z] Copying: 993/1024 [MB] (13 MBps) [2024-10-30T17:27:01.936Z] Copying: 1027928/1048576 [kB] (10184 kBps) [2024-10-30T17:27:02.875Z] Copying: 1014/1024 [MB] (10 MBps) [2024-10-30T17:27:02.875Z] Copying: 1048272/1048576 [kB] (9616 kBps) [2024-10-30T17:27:02.875Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-30 17:27:02.765445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.894 [2024-10-30 17:27:02.765521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:19.894 [2024-10-30 17:27:02.765538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:19.894 [2024-10-30 17:27:02.765548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.894 [2024-10-30 17:27:02.769575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.894 [2024-10-30 17:27:02.773306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.894 [2024-10-30 17:27:02.773373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:19.894 [2024-10-30 17:27:02.773386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:23:19.894 [2024-10-30 17:27:02.773395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:19.894 [2024-10-30 17:27:02.785248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.894 [2024-10-30 17:27:02.785314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:19.894 [2024-10-30 17:27:02.785328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.636 ms 00:23:19.894 [2024-10-30 17:27:02.785336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.894 [2024-10-30 17:27:02.813119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.894 [2024-10-30 17:27:02.813178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:19.894 [2024-10-30 17:27:02.813190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.764 ms 00:23:19.894 [2024-10-30 17:27:02.813212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.894 [2024-10-30 17:27:02.819378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.894 [2024-10-30 17:27:02.819426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:19.894 [2024-10-30 17:27:02.819448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:23:19.894 [2024-10-30 17:27:02.819456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.894 [2024-10-30 17:27:02.846641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.894 [2024-10-30 17:27:02.846696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:19.894 [2024-10-30 17:27:02.846710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.123 ms 00:23:19.894 [2024-10-30 17:27:02.846719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.894 [2024-10-30 17:27:02.864428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.894 [2024-10-30 17:27:02.864482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:19.894 [2024-10-30 17:27:02.864496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.658 ms 00:23:19.894 [2024-10-30 17:27:02.864505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.467 [2024-10-30 17:27:03.147118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.467 [2024-10-30 17:27:03.147174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:20.467 [2024-10-30 17:27:03.147187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 282.554 ms 00:23:20.467 [2024-10-30 17:27:03.147218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.467 [2024-10-30 17:27:03.173533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.467 [2024-10-30 17:27:03.173586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:20.467 [2024-10-30 17:27:03.173599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.297 ms 00:23:20.467 [2024-10-30 17:27:03.173607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.467 [2024-10-30 17:27:03.199686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.467 [2024-10-30 17:27:03.199737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:20.467 [2024-10-30 17:27:03.199751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.029 ms 00:23:20.467 [2024-10-30 
17:27:03.199760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.467 [2024-10-30 17:27:03.225267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.467 [2024-10-30 17:27:03.225317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:20.467 [2024-10-30 17:27:03.225330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.457 ms 00:23:20.467 [2024-10-30 17:27:03.225338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.467 [2024-10-30 17:27:03.251108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.467 [2024-10-30 17:27:03.251156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:20.467 [2024-10-30 17:27:03.251169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.675 ms 00:23:20.467 [2024-10-30 17:27:03.251178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.467 [2024-10-30 17:27:03.251237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:20.467 [2024-10-30 17:27:03.251255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102656 / 261120 wr_cnt: 1 state: open 00:23:20.467 [2024-10-30 17:27:03.251267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 
17:27:03.251399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:23:20.467 [2024-10-30 17:27:03.251592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.251994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.252001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.252009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.252017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.252024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.252033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.252041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:20.467 [2024-10-30 17:27:03.252057] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:20.467 [2024-10-30 17:27:03.252065] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 616d42d0-a4d9-43e9-b1e2-61151db08656 00:23:20.467 [2024-10-30 17:27:03.252074] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102656 00:23:20.467 [2024-10-30 17:27:03.252083] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103616 00:23:20.467 [2024-10-30 17:27:03.252106] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102656 00:23:20.467 [2024-10-30 17:27:03.252116] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:23:20.467 [2024-10-30 17:27:03.252124] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:20.467 [2024-10-30 17:27:03.252134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:20.467 [2024-10-30 17:27:03.252141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:20.467 [2024-10-30 17:27:03.252148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:20.467 [2024-10-30 17:27:03.252157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:20.467 [2024-10-30 17:27:03.252165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.468 [2024-10-30 17:27:03.252173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:20.468 [2024-10-30 17:27:03.252183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:23:20.468 [2024-10-30 17:27:03.252191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.468 [2024-10-30 17:27:03.265766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.468 [2024-10-30 17:27:03.265812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:20.468 [2024-10-30 17:27:03.265825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.512 ms 00:23:20.468 [2024-10-30 17:27:03.265861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.468 [2024-10-30 17:27:03.266298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.468 [2024-10-30 17:27:03.266319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:20.468 [2024-10-30 17:27:03.266331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:23:20.468 [2024-10-30 17:27:03.266339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.468 [2024-10-30 17:27:03.303006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.468 [2024-10-30 17:27:03.303067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:20.468 [2024-10-30 17:27:03.303080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.468 [2024-10-30 17:27:03.303091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.468 [2024-10-30 17:27:03.303163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.468 [2024-10-30 17:27:03.303173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:20.468 [2024-10-30 17:27:03.303184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.468 [2024-10-30 17:27:03.303194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.468 [2024-10-30 17:27:03.303323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.468 [2024-10-30 17:27:03.303336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:20.468 [2024-10-30 17:27:03.303354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.468 [2024-10-30 17:27:03.303364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.468 [2024-10-30 17:27:03.303383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.468 [2024-10-30 17:27:03.303391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:20.468 [2024-10-30 17:27:03.303399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.468 [2024-10-30 17:27:03.303407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.468 [2024-10-30 17:27:03.388089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.468 [2024-10-30 17:27:03.388148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:20.468 [2024-10-30 17:27:03.388162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.468 [2024-10-30 17:27:03.388171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.728 [2024-10-30 17:27:03.458274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.728 [2024-10-30 17:27:03.458333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:20.728 [2024-10-30 17:27:03.458345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.728 [2024-10-30 17:27:03.458354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.728 [2024-10-30 17:27:03.458417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.728 [2024-10-30 17:27:03.458436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.728 [2024-10-30 17:27:03.458446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.728 [2024-10-30 17:27:03.458454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.728 [2024-10-30 17:27:03.458522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:23:20.728 [2024-10-30 17:27:03.458534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.728 [2024-10-30 17:27:03.458542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.728 [2024-10-30 17:27:03.458551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.728 [2024-10-30 17:27:03.458649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.728 [2024-10-30 17:27:03.458663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.728 [2024-10-30 17:27:03.458673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.728 [2024-10-30 17:27:03.458681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.728 [2024-10-30 17:27:03.458712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.728 [2024-10-30 17:27:03.458722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:20.728 [2024-10-30 17:27:03.458731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.728 [2024-10-30 17:27:03.458740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.728 [2024-10-30 17:27:03.458781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.728 [2024-10-30 17:27:03.458791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.728 [2024-10-30 17:27:03.458802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.728 [2024-10-30 17:27:03.458810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.728 [2024-10-30 17:27:03.458856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.728 [2024-10-30 17:27:03.458869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.728 [2024-10-30 17:27:03.458878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.728 [2024-10-30 17:27:03.458887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.729 [2024-10-30 17:27:03.459023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 693.573 ms, result 0 00:23:21.670 00:23:21.670 00:23:21.670 17:27:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:24.216 17:27:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:24.216 [2024-10-30 17:27:06.764862] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
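The statistics dumped just before this shutdown put WAF at 1.0094, which is simply total writes over user writes: 103616 / 102656 ≈ 1.0094. After the dirty shutdown, dirty_shutdown.sh@93 reads the data back out of the ftl0 bdev with spdk_dd so its checksum can be checked against the reference sum taken at @90 above. A minimal sketch of that read-back-and-verify step (the spdk_dd flags are copied from the invocation above; comparing exactly these two md5sums is an assumption about how the script uses them):

    SPDK=/home/vagrant/spdk_repo/spdk
    # read 262144 blocks back from the ftl0 bdev into a regular file
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile" \
        --count=262144 \
        --json="$SPDK/test/ftl/config/ftl.json"
    # the sums should match if no acknowledged writes were lost across the dirty shutdown
    md5sum "$SPDK/test/ftl/testfile" "$SPDK/test/ftl/testfile2"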
00:23:24.217 [2024-10-30 17:27:06.764978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78309 ] 00:23:24.217 [2024-10-30 17:27:06.927514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.217 [2024-10-30 17:27:07.048260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.478 [2024-10-30 17:27:07.339007] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.478 [2024-10-30 17:27:07.339094] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.741 [2024-10-30 17:27:07.500414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.500480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:24.741 [2024-10-30 17:27:07.500499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:24.741 [2024-10-30 17:27:07.500508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.500561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.500572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.741 [2024-10-30 17:27:07.500583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:24.741 [2024-10-30 17:27:07.500592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.500613] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:24.741 [2024-10-30 17:27:07.501326] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:24.741 [2024-10-30 17:27:07.501356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.501366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.741 [2024-10-30 17:27:07.501375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:23:24.741 [2024-10-30 17:27:07.501383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.503105] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:24.741 [2024-10-30 17:27:07.517677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.517730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:24.741 [2024-10-30 17:27:07.517745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.574 ms 00:23:24.741 [2024-10-30 17:27:07.517754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.517852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.517866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:24.741 [2024-10-30 17:27:07.517876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:24.741 [2024-10-30 17:27:07.517884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.526422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:24.741 [2024-10-30 17:27:07.526470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.741 [2024-10-30 17:27:07.526482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.454 ms 00:23:24.741 [2024-10-30 17:27:07.526491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.526585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.526595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.741 [2024-10-30 17:27:07.526604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:24.741 [2024-10-30 17:27:07.526613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.526659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.526670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:24.741 [2024-10-30 17:27:07.526679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:24.741 [2024-10-30 17:27:07.526686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.526709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:24.741 [2024-10-30 17:27:07.530750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.530792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.741 [2024-10-30 17:27:07.530804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.045 ms 00:23:24.741 [2024-10-30 17:27:07.530816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.530853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.530863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:24.741 [2024-10-30 17:27:07.530871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:24.741 [2024-10-30 17:27:07.530880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.530937] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:24.741 [2024-10-30 17:27:07.530961] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:24.741 [2024-10-30 17:27:07.530999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:24.741 [2024-10-30 17:27:07.531020] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:24.741 [2024-10-30 17:27:07.531126] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:24.741 [2024-10-30 17:27:07.531136] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:24.741 [2024-10-30 17:27:07.531148] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:24.741 [2024-10-30 17:27:07.531159] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:24.741 [2024-10-30 17:27:07.531169] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:24.741 [2024-10-30 17:27:07.531177] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:24.741 [2024-10-30 17:27:07.531185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:24.741 [2024-10-30 17:27:07.531193] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:24.741 [2024-10-30 17:27:07.531216] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:24.741 [2024-10-30 17:27:07.531229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.531238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:24.741 [2024-10-30 17:27:07.531246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:23:24.741 [2024-10-30 17:27:07.531253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.531338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.741 [2024-10-30 17:27:07.531347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:24.741 [2024-10-30 17:27:07.531355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:24.741 [2024-10-30 17:27:07.531363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.741 [2024-10-30 17:27:07.531469] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:24.741 [2024-10-30 17:27:07.531490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:24.741 [2024-10-30 17:27:07.531499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.741 [2024-10-30 17:27:07.531508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.741 [2024-10-30 17:27:07.531517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:24.741 [2024-10-30 17:27:07.531524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:24.741 [2024-10-30 17:27:07.531532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:24.741 [2024-10-30 17:27:07.531539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:24.741 [2024-10-30 17:27:07.531547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:24.741 [2024-10-30 17:27:07.531554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.741 [2024-10-30 17:27:07.531561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:24.742 [2024-10-30 17:27:07.531569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:24.742 [2024-10-30 17:27:07.531577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.742 [2024-10-30 17:27:07.531584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:24.742 [2024-10-30 17:27:07.531592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:24.742 [2024-10-30 17:27:07.531606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:24.742 [2024-10-30 17:27:07.531621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:24.742 [2024-10-30 17:27:07.531628] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:24.742 [2024-10-30 17:27:07.531643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.742 [2024-10-30 17:27:07.531657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:24.742 [2024-10-30 17:27:07.531663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.742 [2024-10-30 17:27:07.531676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:24.742 [2024-10-30 17:27:07.531683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.742 [2024-10-30 17:27:07.531697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:24.742 [2024-10-30 17:27:07.531704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.742 [2024-10-30 17:27:07.531719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:24.742 [2024-10-30 17:27:07.531725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.742 [2024-10-30 17:27:07.531739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:24.742 [2024-10-30 17:27:07.531746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:24.742 [2024-10-30 17:27:07.531753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.742 [2024-10-30 17:27:07.531760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:24.742 [2024-10-30 17:27:07.531768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:24.742 [2024-10-30 17:27:07.531775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:24.742 [2024-10-30 17:27:07.531789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:24.742 [2024-10-30 17:27:07.531797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531805] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:24.742 [2024-10-30 17:27:07.531814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:24.742 [2024-10-30 17:27:07.531822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.742 [2024-10-30 17:27:07.531830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.742 [2024-10-30 17:27:07.531839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:24.742 [2024-10-30 17:27:07.531847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:24.742 [2024-10-30 17:27:07.531854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:24.742 
[2024-10-30 17:27:07.531862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:24.742 [2024-10-30 17:27:07.531869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:24.742 [2024-10-30 17:27:07.531876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:24.742 [2024-10-30 17:27:07.531885] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:24.742 [2024-10-30 17:27:07.531894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.742 [2024-10-30 17:27:07.531903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:24.742 [2024-10-30 17:27:07.531911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:24.742 [2024-10-30 17:27:07.531920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:24.742 [2024-10-30 17:27:07.531927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:24.742 [2024-10-30 17:27:07.531936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:24.742 [2024-10-30 17:27:07.531943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:24.742 [2024-10-30 17:27:07.531950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:24.742 [2024-10-30 17:27:07.531958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:24.742 [2024-10-30 17:27:07.531965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:24.742 [2024-10-30 17:27:07.531973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:24.742 [2024-10-30 17:27:07.531980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:24.742 [2024-10-30 17:27:07.531987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:24.742 [2024-10-30 17:27:07.531994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:24.742 [2024-10-30 17:27:07.532001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:24.742 [2024-10-30 17:27:07.532008] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:24.742 [2024-10-30 17:27:07.532016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.742 [2024-10-30 17:27:07.532027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:24.742 [2024-10-30 17:27:07.532034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:24.742 [2024-10-30 17:27:07.532041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:24.742 [2024-10-30 17:27:07.532048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:24.742 [2024-10-30 17:27:07.532059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.532067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:24.742 [2024-10-30 17:27:07.532075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:23:24.742 [2024-10-30 17:27:07.532083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.564862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.564920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.742 [2024-10-30 17:27:07.564933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.733 ms 00:23:24.742 [2024-10-30 17:27:07.564941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.565035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.565049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:24.742 [2024-10-30 17:27:07.565058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:24.742 [2024-10-30 17:27:07.565068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.615361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.615419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.742 [2024-10-30 17:27:07.615433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.228 ms 00:23:24.742 [2024-10-30 17:27:07.615442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.615493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.615503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.742 [2024-10-30 17:27:07.615513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:24.742 [2024-10-30 17:27:07.615526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.616143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.616189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.742 [2024-10-30 17:27:07.616219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:23:24.742 [2024-10-30 17:27:07.616229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.616393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.616404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.742 [2024-10-30 17:27:07.616413] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:24.742 [2024-10-30 17:27:07.616421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.632480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.632531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.742 [2024-10-30 17:27:07.632543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.031 ms 00:23:24.742 [2024-10-30 17:27:07.632556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.647142] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:24.742 [2024-10-30 17:27:07.647197] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:24.742 [2024-10-30 17:27:07.647223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.647232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:24.742 [2024-10-30 17:27:07.647242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.550 ms 00:23:24.742 [2024-10-30 17:27:07.647250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.742 [2024-10-30 17:27:07.673634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.742 [2024-10-30 17:27:07.673714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:24.742 [2024-10-30 17:27:07.673727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.324 ms 00:23:24.743 [2024-10-30 17:27:07.673735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.743 [2024-10-30 17:27:07.687232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.743 [2024-10-30 17:27:07.687296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:24.743 [2024-10-30 17:27:07.687308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.436 ms 00:23:24.743 [2024-10-30 17:27:07.687316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.743 [2024-10-30 17:27:07.700398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.743 [2024-10-30 17:27:07.700448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:24.743 [2024-10-30 17:27:07.700461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.030 ms 00:23:24.743 [2024-10-30 17:27:07.700469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.743 [2024-10-30 17:27:07.701122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.743 [2024-10-30 17:27:07.701155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:24.743 [2024-10-30 17:27:07.701167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:23:24.743 [2024-10-30 17:27:07.701175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.003 [2024-10-30 17:27:07.769909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.003 [2024-10-30 17:27:07.769974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:25.003 [2024-10-30 17:27:07.769991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.709 ms 00:23:25.003 [2024-10-30 17:27:07.770008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.003 [2024-10-30 17:27:07.781297] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:25.003 [2024-10-30 17:27:07.784785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.003 [2024-10-30 17:27:07.784832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:25.003 [2024-10-30 17:27:07.784844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.718 ms 00:23:25.003 [2024-10-30 17:27:07.784855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.003 [2024-10-30 17:27:07.784945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.003 [2024-10-30 17:27:07.784957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:25.003 [2024-10-30 17:27:07.784967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:25.003 [2024-10-30 17:27:07.784975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.003 [2024-10-30 17:27:07.786690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.003 [2024-10-30 17:27:07.786742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:25.003 [2024-10-30 17:27:07.786753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:23:25.003 [2024-10-30 17:27:07.786762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.003 [2024-10-30 17:27:07.786793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.003 [2024-10-30 17:27:07.786803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:25.003 [2024-10-30 17:27:07.786812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:25.003 [2024-10-30 17:27:07.786821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.003 [2024-10-30 17:27:07.786866] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:25.003 [2024-10-30 17:27:07.786880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.003 [2024-10-30 17:27:07.786889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:25.003 [2024-10-30 17:27:07.786898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:25.003 [2024-10-30 17:27:07.786905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.003 [2024-10-30 17:27:07.813509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.003 [2024-10-30 17:27:07.813565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:25.003 [2024-10-30 17:27:07.813579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.584 ms 00:23:25.003 [2024-10-30 17:27:07.813588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.003 [2024-10-30 17:27:07.813689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.003 [2024-10-30 17:27:07.813700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:25.003 [2024-10-30 17:27:07.813710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:25.003 [2024-10-30 17:27:07.813718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:25.003 [2024-10-30 17:27:07.815038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.125 ms, result 0 00:23:26.388  [2024-10-30T17:27:10.309Z] Copying: 1296/1048576 [kB] (1296 kBps) [2024-10-30T17:27:11.252Z] Copying: 4480/1048576 [kB] (3184 kBps) [2024-10-30T17:27:12.196Z] Copying: 17/1024 [MB] (13 MBps) [2024-10-30T17:27:13.140Z] Copying: 48/1024 [MB] (30 MBps) [2024-10-30T17:27:14.080Z] Copying: 73/1024 [MB] (25 MBps) [2024-10-30T17:27:15.026Z] Copying: 100/1024 [MB] (26 MBps) [2024-10-30T17:27:16.415Z] Copying: 129/1024 [MB] (29 MBps) [2024-10-30T17:27:17.361Z] Copying: 157/1024 [MB] (28 MBps) [2024-10-30T17:27:18.305Z] Copying: 184/1024 [MB] (26 MBps) [2024-10-30T17:27:19.251Z] Copying: 205/1024 [MB] (21 MBps) [2024-10-30T17:27:20.196Z] Copying: 238/1024 [MB] (32 MBps) [2024-10-30T17:27:21.140Z] Copying: 265/1024 [MB] (26 MBps) [2024-10-30T17:27:22.084Z] Copying: 294/1024 [MB] (29 MBps) [2024-10-30T17:27:23.029Z] Copying: 312/1024 [MB] (18 MBps) [2024-10-30T17:27:24.418Z] Copying: 342/1024 [MB] (29 MBps) [2024-10-30T17:27:25.366Z] Copying: 388/1024 [MB] (46 MBps) [2024-10-30T17:27:26.309Z] Copying: 420/1024 [MB] (31 MBps) [2024-10-30T17:27:27.254Z] Copying: 437/1024 [MB] (16 MBps) [2024-10-30T17:27:28.200Z] Copying: 459/1024 [MB] (22 MBps) [2024-10-30T17:27:29.232Z] Copying: 475/1024 [MB] (15 MBps) [2024-10-30T17:27:30.173Z] Copying: 493/1024 [MB] (17 MBps) [2024-10-30T17:27:31.117Z] Copying: 520/1024 [MB] (27 MBps) [2024-10-30T17:27:32.058Z] Copying: 554/1024 [MB] (33 MBps) [2024-10-30T17:27:33.445Z] Copying: 591/1024 [MB] (36 MBps) [2024-10-30T17:27:34.017Z] Copying: 617/1024 [MB] (26 MBps) [2024-10-30T17:27:35.406Z] Copying: 641/1024 [MB] (24 MBps) [2024-10-30T17:27:36.350Z] Copying: 667/1024 [MB] (26 MBps) [2024-10-30T17:27:37.294Z] Copying: 701/1024 [MB] (33 MBps) [2024-10-30T17:27:38.237Z] Copying: 733/1024 [MB] (32 MBps) [2024-10-30T17:27:39.180Z] Copying: 752/1024 [MB] (18 MBps) [2024-10-30T17:27:40.124Z] Copying: 791/1024 [MB] (39 MBps) [2024-10-30T17:27:41.069Z] Copying: 830/1024 [MB] (38 MBps) [2024-10-30T17:27:42.014Z] Copying: 864/1024 [MB] (34 MBps) [2024-10-30T17:27:43.400Z] Copying: 905/1024 [MB] (40 MBps) [2024-10-30T17:27:44.343Z] Copying: 940/1024 [MB] (35 MBps) [2024-10-30T17:27:45.286Z] Copying: 969/1024 [MB] (29 MBps) [2024-10-30T17:27:45.286Z] Copying: 1014/1024 [MB] (44 MBps) [2024-10-30T17:27:45.859Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-30 17:27:45.833874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.878 [2024-10-30 17:27:45.834055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.878 [2024-10-30 17:27:45.834102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:02.878 [2024-10-30 17:27:45.834113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.878 [2024-10-30 17:27:45.834141] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.878 [2024-10-30 17:27:45.837763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.878 [2024-10-30 17:27:45.837817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.878 [2024-10-30 17:27:45.837842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.601 ms 00:24:02.878 [2024-10-30 17:27:45.837852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.878 
[2024-10-30 17:27:45.838131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.878 [2024-10-30 17:27:45.838145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.878 [2024-10-30 17:27:45.838156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:24:02.878 [2024-10-30 17:27:45.838170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.878 [2024-10-30 17:27:45.851369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.878 [2024-10-30 17:27:45.851425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.878 [2024-10-30 17:27:45.851438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.179 ms 00:24:02.878 [2024-10-30 17:27:45.851448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.878 [2024-10-30 17:27:45.858010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.878 [2024-10-30 17:27:45.858063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.878 [2024-10-30 17:27:45.858075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.522 ms 00:24:02.878 [2024-10-30 17:27:45.858092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.140 [2024-10-30 17:27:45.885396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.140 [2024-10-30 17:27:45.885449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:03.140 [2024-10-30 17:27:45.885463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.241 ms 00:24:03.140 [2024-10-30 17:27:45.885472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.140 [2024-10-30 17:27:45.901508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.140 [2024-10-30 17:27:45.901560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:03.140 [2024-10-30 17:27:45.901573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.989 ms 00:24:03.140 [2024-10-30 17:27:45.901582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.140 [2024-10-30 17:27:45.905840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.140 [2024-10-30 17:27:45.905887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:03.140 [2024-10-30 17:27:45.905900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.205 ms 00:24:03.140 [2024-10-30 17:27:45.905909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.140 [2024-10-30 17:27:45.931999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.140 [2024-10-30 17:27:45.932047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:03.140 [2024-10-30 17:27:45.932059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.074 ms 00:24:03.140 [2024-10-30 17:27:45.932067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.140 [2024-10-30 17:27:45.957860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.140 [2024-10-30 17:27:45.957907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:03.140 [2024-10-30 17:27:45.957932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.747 ms 00:24:03.140 [2024-10-30 17:27:45.957941] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.140 [2024-10-30 17:27:45.982632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.140 [2024-10-30 17:27:45.982679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:03.140 [2024-10-30 17:27:45.982691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.643 ms 00:24:03.140 [2024-10-30 17:27:45.982699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.140 [2024-10-30 17:27:46.007063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.140 [2024-10-30 17:27:46.007111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:03.140 [2024-10-30 17:27:46.007122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.288 ms 00:24:03.140 [2024-10-30 17:27:46.007130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.141 [2024-10-30 17:27:46.007176] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:03.141 [2024-10-30 17:27:46.007193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:03.141 [2024-10-30 17:27:46.007217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:03.141 [2024-10-30 17:27:46.007226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007350] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 
17:27:46.007550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:24:03.141 [2024-10-30 17:27:46.007745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:24:03.141 [2024-10-30 17:27:46.007936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:03.142 [2024-10-30 17:27:46.007944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:03.142 [2024-10-30 17:27:46.007953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:03.142 [2024-10-30 17:27:46.007961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:03.142 [2024-10-30 17:27:46.007970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:03.142 [2024-10-30 17:27:46.007977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:03.142 [2024-10-30 17:27:46.007985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:03.142 [2024-10-30 17:27:46.007992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:03.142 [2024-10-30 17:27:46.008008] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:03.142 [2024-10-30 17:27:46.008017] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 616d42d0-a4d9-43e9-b1e2-61151db08656 00:24:03.142 [2024-10-30 17:27:46.008026] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:03.142 [2024-10-30 17:27:46.008034] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161984 00:24:03.142 [2024-10-30 17:27:46.008042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160000 00:24:03.142 [2024-10-30 17:27:46.008051] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:24:03.142 [2024-10-30 17:27:46.008061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:03.142 [2024-10-30 17:27:46.008070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:03.142 [2024-10-30 17:27:46.008077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:03.142 [2024-10-30 17:27:46.008091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:03.142 [2024-10-30 17:27:46.008098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:03.142 [2024-10-30 17:27:46.008105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.142 [2024-10-30 17:27:46.008114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:03.142 [2024-10-30 17:27:46.008122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:24:03.142 [2024-10-30 17:27:46.008130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.142 [2024-10-30 17:27:46.021393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.142 [2024-10-30 17:27:46.021439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:03.142 [2024-10-30 17:27:46.021458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.239 ms 00:24:03.142 [2024-10-30 17:27:46.021465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.142 [2024-10-30 17:27:46.021895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.142 [2024-10-30 17:27:46.021908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:24:03.142 [2024-10-30 17:27:46.021919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:24:03.142 [2024-10-30 17:27:46.021927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.142 [2024-10-30 17:27:46.058337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.142 [2024-10-30 17:27:46.058386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.142 [2024-10-30 17:27:46.058398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.142 [2024-10-30 17:27:46.058406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.142 [2024-10-30 17:27:46.058473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.142 [2024-10-30 17:27:46.058482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.142 [2024-10-30 17:27:46.058491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.142 [2024-10-30 17:27:46.058499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.142 [2024-10-30 17:27:46.058584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.142 [2024-10-30 17:27:46.058599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.142 [2024-10-30 17:27:46.058607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.142 [2024-10-30 17:27:46.058616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.142 [2024-10-30 17:27:46.058632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.142 [2024-10-30 17:27:46.058640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.142 [2024-10-30 17:27:46.058647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.142 [2024-10-30 17:27:46.058655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.142171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.404 [2024-10-30 17:27:46.142279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.404 [2024-10-30 17:27:46.142292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.404 [2024-10-30 17:27:46.142301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.210873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.404 [2024-10-30 17:27:46.210933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.404 [2024-10-30 17:27:46.210946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.404 [2024-10-30 17:27:46.210955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.211017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.404 [2024-10-30 17:27:46.211027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.404 [2024-10-30 17:27:46.211037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.404 [2024-10-30 17:27:46.211053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.211115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.404 
[2024-10-30 17:27:46.211126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.404 [2024-10-30 17:27:46.211135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.404 [2024-10-30 17:27:46.211143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.211268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.404 [2024-10-30 17:27:46.211279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.404 [2024-10-30 17:27:46.211288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.404 [2024-10-30 17:27:46.211299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.211332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.404 [2024-10-30 17:27:46.211341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:03.404 [2024-10-30 17:27:46.211350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.404 [2024-10-30 17:27:46.211358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.211399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.404 [2024-10-30 17:27:46.211408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.404 [2024-10-30 17:27:46.211417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.404 [2024-10-30 17:27:46.211425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.211474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.404 [2024-10-30 17:27:46.211484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.404 [2024-10-30 17:27:46.211492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.404 [2024-10-30 17:27:46.211500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.404 [2024-10-30 17:27:46.211632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.726 ms, result 0 00:24:04.347 00:24:04.347 00:24:04.348 17:27:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:06.261 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:06.261 17:27:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:06.261 [2024-10-30 17:27:49.060627] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:24:06.261 [2024-10-30 17:27:49.060831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78740 ] 00:24:06.261 [2024-10-30 17:27:49.215857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.520 [2024-10-30 17:27:49.317761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.781 [2024-10-30 17:27:49.608541] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:06.781 [2024-10-30 17:27:49.608612] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.041 [2024-10-30 17:27:49.770020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.770074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.042 [2024-10-30 17:27:49.770092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:07.042 [2024-10-30 17:27:49.770101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.770155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.770166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.042 [2024-10-30 17:27:49.770179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:07.042 [2024-10-30 17:27:49.770187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.770223] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.042 [2024-10-30 17:27:49.770926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.042 [2024-10-30 17:27:49.770955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.770964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.042 [2024-10-30 17:27:49.770973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:24:07.042 [2024-10-30 17:27:49.770981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.772688] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.042 [2024-10-30 17:27:49.787108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.787151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.042 [2024-10-30 17:27:49.787164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.421 ms 00:24:07.042 [2024-10-30 17:27:49.787173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.787259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.787273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.042 [2024-10-30 17:27:49.787282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:07.042 [2024-10-30 17:27:49.787290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.795326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:07.042 [2024-10-30 17:27:49.795359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.042 [2024-10-30 17:27:49.795371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.958 ms 00:24:07.042 [2024-10-30 17:27:49.795379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.795468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.795477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.042 [2024-10-30 17:27:49.795485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:07.042 [2024-10-30 17:27:49.795493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.795536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.795547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.042 [2024-10-30 17:27:49.795555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.042 [2024-10-30 17:27:49.795563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.795585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:07.042 [2024-10-30 17:27:49.799749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.799781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.042 [2024-10-30 17:27:49.799791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:24:07.042 [2024-10-30 17:27:49.799803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.799837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.799845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.042 [2024-10-30 17:27:49.799854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:07.042 [2024-10-30 17:27:49.799862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.799910] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.042 [2024-10-30 17:27:49.799933] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.042 [2024-10-30 17:27:49.799970] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.042 [2024-10-30 17:27:49.799989] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:07.042 [2024-10-30 17:27:49.800095] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.042 [2024-10-30 17:27:49.800106] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.042 [2024-10-30 17:27:49.800118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:07.042 [2024-10-30 17:27:49.800129] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800137] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800146] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:07.042 [2024-10-30 17:27:49.800154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.042 [2024-10-30 17:27:49.800162] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.042 [2024-10-30 17:27:49.800170] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.042 [2024-10-30 17:27:49.800181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.800189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.042 [2024-10-30 17:27:49.800212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:24:07.042 [2024-10-30 17:27:49.800221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.800304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.042 [2024-10-30 17:27:49.800313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.042 [2024-10-30 17:27:49.800321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:07.042 [2024-10-30 17:27:49.800329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.042 [2024-10-30 17:27:49.800433] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.042 [2024-10-30 17:27:49.800447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.042 [2024-10-30 17:27:49.800456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.042 [2024-10-30 17:27:49.800479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.042 [2024-10-30 17:27:49.800502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.042 [2024-10-30 17:27:49.800516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.042 [2024-10-30 17:27:49.800523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:07.042 [2024-10-30 17:27:49.800530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.042 [2024-10-30 17:27:49.800540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.042 [2024-10-30 17:27:49.800547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:07.042 [2024-10-30 17:27:49.800560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:07.042 [2024-10-30 17:27:49.800575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800582] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.042 [2024-10-30 17:27:49.800596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.042 [2024-10-30 17:27:49.800617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.042 [2024-10-30 17:27:49.800638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.042 [2024-10-30 17:27:49.800658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.042 [2024-10-30 17:27:49.800671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.042 [2024-10-30 17:27:49.800677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.042 [2024-10-30 17:27:49.800691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.042 [2024-10-30 17:27:49.800697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:07.042 [2024-10-30 17:27:49.800704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.042 [2024-10-30 17:27:49.800710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.042 [2024-10-30 17:27:49.800717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:07.042 [2024-10-30 17:27:49.800723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.042 [2024-10-30 17:27:49.800736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:07.042 [2024-10-30 17:27:49.800743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.042 [2024-10-30 17:27:49.800752] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.042 [2024-10-30 17:27:49.800762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.043 [2024-10-30 17:27:49.800770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.043 [2024-10-30 17:27:49.800778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.043 [2024-10-30 17:27:49.800786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:07.043 [2024-10-30 17:27:49.800793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.043 [2024-10-30 17:27:49.800800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:07.043 
[2024-10-30 17:27:49.800808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.043 [2024-10-30 17:27:49.800814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.043 [2024-10-30 17:27:49.800821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.043 [2024-10-30 17:27:49.800829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.043 [2024-10-30 17:27:49.800838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.043 [2024-10-30 17:27:49.800847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:07.043 [2024-10-30 17:27:49.800854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:07.043 [2024-10-30 17:27:49.800862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:07.043 [2024-10-30 17:27:49.800868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:07.043 [2024-10-30 17:27:49.800875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:07.043 [2024-10-30 17:27:49.800883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:07.043 [2024-10-30 17:27:49.800889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:07.043 [2024-10-30 17:27:49.800897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:07.043 [2024-10-30 17:27:49.800904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:07.043 [2024-10-30 17:27:49.800911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:07.043 [2024-10-30 17:27:49.800917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:07.043 [2024-10-30 17:27:49.800924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:07.043 [2024-10-30 17:27:49.800931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:07.043 [2024-10-30 17:27:49.800939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:07.043 [2024-10-30 17:27:49.800946] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:07.043 [2024-10-30 17:27:49.800954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.043 [2024-10-30 17:27:49.800965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.043 [2024-10-30 17:27:49.800972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.043 [2024-10-30 17:27:49.800979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.043 [2024-10-30 17:27:49.800986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.043 [2024-10-30 17:27:49.800994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.801002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.043 [2024-10-30 17:27:49.801010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:24:07.043 [2024-10-30 17:27:49.801018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.832930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.832972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.043 [2024-10-30 17:27:49.832983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.868 ms 00:24:07.043 [2024-10-30 17:27:49.832991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.833083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.833097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:07.043 [2024-10-30 17:27:49.833106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:07.043 [2024-10-30 17:27:49.833114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.886491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.886552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.043 [2024-10-30 17:27:49.886566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.321 ms 00:24:07.043 [2024-10-30 17:27:49.886575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.886624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.886634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.043 [2024-10-30 17:27:49.886643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:07.043 [2024-10-30 17:27:49.886655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.887281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.887311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.043 [2024-10-30 17:27:49.887323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:24:07.043 [2024-10-30 17:27:49.887332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.887488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.887498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.043 [2024-10-30 17:27:49.887507] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:07.043 [2024-10-30 17:27:49.887515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.903352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.903391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.043 [2024-10-30 17:27:49.903402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.809 ms 00:24:07.043 [2024-10-30 17:27:49.903414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.917699] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:07.043 [2024-10-30 17:27:49.917743] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:07.043 [2024-10-30 17:27:49.917757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.917765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:07.043 [2024-10-30 17:27:49.917775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.230 ms 00:24:07.043 [2024-10-30 17:27:49.917782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.943458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.943504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.043 [2024-10-30 17:27:49.943516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.603 ms 00:24:07.043 [2024-10-30 17:27:49.943524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.956436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.956472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.043 [2024-10-30 17:27:49.956484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.859 ms 00:24:07.043 [2024-10-30 17:27:49.956491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.968824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.968863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.043 [2024-10-30 17:27:49.968875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms 00:24:07.043 [2024-10-30 17:27:49.968883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.043 [2024-10-30 17:27:49.969570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.043 [2024-10-30 17:27:49.969596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.043 [2024-10-30 17:27:49.969606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:24:07.043 [2024-10-30 17:27:49.969615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.303 [2024-10-30 17:27:50.037077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.303 [2024-10-30 17:27:50.037137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:07.303 [2024-10-30 17:27:50.037154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.436 ms 00:24:07.303 [2024-10-30 17:27:50.037171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.303 [2024-10-30 17:27:50.048589] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:07.303 [2024-10-30 17:27:50.052607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.303 [2024-10-30 17:27:50.052647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:07.303 [2024-10-30 17:27:50.052662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.326 ms 00:24:07.303 [2024-10-30 17:27:50.052671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.303 [2024-10-30 17:27:50.052784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.303 [2024-10-30 17:27:50.052796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:07.303 [2024-10-30 17:27:50.052806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:07.303 [2024-10-30 17:27:50.052815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.303 [2024-10-30 17:27:50.053695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.303 [2024-10-30 17:27:50.053737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:07.303 [2024-10-30 17:27:50.053750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:24:07.303 [2024-10-30 17:27:50.053760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.303 [2024-10-30 17:27:50.053792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.303 [2024-10-30 17:27:50.053802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.303 [2024-10-30 17:27:50.053812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.303 [2024-10-30 17:27:50.053835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.303 [2024-10-30 17:27:50.053885] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:07.303 [2024-10-30 17:27:50.053900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.303 [2024-10-30 17:27:50.053910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:07.303 [2024-10-30 17:27:50.053922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:07.303 [2024-10-30 17:27:50.053931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.303 [2024-10-30 17:27:50.080132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.303 [2024-10-30 17:27:50.080177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:07.303 [2024-10-30 17:27:50.080191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.178 ms 00:24:07.303 [2024-10-30 17:27:50.080209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.303 [2024-10-30 17:27:50.080306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.303 [2024-10-30 17:27:50.080317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:07.304 [2024-10-30 17:27:50.080327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:07.304 [2024-10-30 17:27:50.080336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:07.304 [2024-10-30 17:27:50.081660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 311.120 ms, result 0 00:24:08.692  [2024-10-30T17:27:52.615Z] Copying: 20/1024 [MB] (20 MBps) [2024-10-30T17:27:53.559Z] Copying: 42/1024 [MB] (21 MBps) [2024-10-30T17:27:54.504Z] Copying: 60/1024 [MB] (17 MBps) [2024-10-30T17:27:55.447Z] Copying: 79/1024 [MB] (19 MBps) [2024-10-30T17:27:56.391Z] Copying: 97/1024 [MB] (18 MBps) [2024-10-30T17:27:57.397Z] Copying: 109/1024 [MB] (11 MBps) [2024-10-30T17:27:58.339Z] Copying: 120/1024 [MB] (11 MBps) [2024-10-30T17:27:59.281Z] Copying: 132/1024 [MB] (11 MBps) [2024-10-30T17:28:00.665Z] Copying: 144/1024 [MB] (11 MBps) [2024-10-30T17:28:01.609Z] Copying: 158/1024 [MB] (14 MBps) [2024-10-30T17:28:02.552Z] Copying: 177/1024 [MB] (18 MBps) [2024-10-30T17:28:03.497Z] Copying: 190/1024 [MB] (12 MBps) [2024-10-30T17:28:04.439Z] Copying: 205/1024 [MB] (15 MBps) [2024-10-30T17:28:05.384Z] Copying: 219/1024 [MB] (14 MBps) [2024-10-30T17:28:06.328Z] Copying: 242/1024 [MB] (22 MBps) [2024-10-30T17:28:07.272Z] Copying: 257/1024 [MB] (15 MBps) [2024-10-30T17:28:08.661Z] Copying: 273/1024 [MB] (15 MBps) [2024-10-30T17:28:09.603Z] Copying: 292/1024 [MB] (19 MBps) [2024-10-30T17:28:10.545Z] Copying: 313/1024 [MB] (20 MBps) [2024-10-30T17:28:11.490Z] Copying: 333/1024 [MB] (19 MBps) [2024-10-30T17:28:12.436Z] Copying: 349/1024 [MB] (16 MBps) [2024-10-30T17:28:13.380Z] Copying: 371/1024 [MB] (21 MBps) [2024-10-30T17:28:14.324Z] Copying: 387/1024 [MB] (16 MBps) [2024-10-30T17:28:15.268Z] Copying: 398/1024 [MB] (11 MBps) [2024-10-30T17:28:16.655Z] Copying: 414/1024 [MB] (15 MBps) [2024-10-30T17:28:17.599Z] Copying: 425/1024 [MB] (11 MBps) [2024-10-30T17:28:18.543Z] Copying: 436/1024 [MB] (10 MBps) [2024-10-30T17:28:19.483Z] Copying: 446/1024 [MB] (10 MBps) [2024-10-30T17:28:20.424Z] Copying: 457/1024 [MB] (10 MBps) [2024-10-30T17:28:21.365Z] Copying: 468/1024 [MB] (10 MBps) [2024-10-30T17:28:22.303Z] Copying: 479/1024 [MB] (11 MBps) [2024-10-30T17:28:23.687Z] Copying: 490/1024 [MB] (10 MBps) [2024-10-30T17:28:24.258Z] Copying: 501/1024 [MB] (11 MBps) [2024-10-30T17:28:25.644Z] Copying: 512/1024 [MB] (10 MBps) [2024-10-30T17:28:26.263Z] Copying: 525/1024 [MB] (13 MBps) [2024-10-30T17:28:27.653Z] Copying: 535/1024 [MB] (10 MBps) [2024-10-30T17:28:28.595Z] Copying: 547/1024 [MB] (11 MBps) [2024-10-30T17:28:29.540Z] Copying: 557/1024 [MB] (10 MBps) [2024-10-30T17:28:30.489Z] Copying: 575/1024 [MB] (17 MBps) [2024-10-30T17:28:31.433Z] Copying: 590/1024 [MB] (15 MBps) [2024-10-30T17:28:32.379Z] Copying: 611/1024 [MB] (20 MBps) [2024-10-30T17:28:33.323Z] Copying: 634/1024 [MB] (22 MBps) [2024-10-30T17:28:34.266Z] Copying: 650/1024 [MB] (16 MBps) [2024-10-30T17:28:35.654Z] Copying: 668/1024 [MB] (18 MBps) [2024-10-30T17:28:36.599Z] Copying: 688/1024 [MB] (20 MBps) [2024-10-30T17:28:37.543Z] Copying: 712/1024 [MB] (23 MBps) [2024-10-30T17:28:38.489Z] Copying: 734/1024 [MB] (22 MBps) [2024-10-30T17:28:39.434Z] Copying: 756/1024 [MB] (21 MBps) [2024-10-30T17:28:40.376Z] Copying: 777/1024 [MB] (21 MBps) [2024-10-30T17:28:41.320Z] Copying: 793/1024 [MB] (16 MBps) [2024-10-30T17:28:42.265Z] Copying: 806/1024 [MB] (12 MBps) [2024-10-30T17:28:43.653Z] Copying: 817/1024 [MB] (10 MBps) [2024-10-30T17:28:44.596Z] Copying: 828/1024 [MB] (10 MBps) [2024-10-30T17:28:45.540Z] Copying: 839/1024 [MB] (10 MBps) [2024-10-30T17:28:46.484Z] Copying: 849/1024 [MB] (10 MBps) [2024-10-30T17:28:47.428Z] Copying: 860/1024 [MB] (10 MBps) 
[2024-10-30T17:28:48.370Z] Copying: 871/1024 [MB] (10 MBps) [2024-10-30T17:28:49.314Z] Copying: 887/1024 [MB] (15 MBps) [2024-10-30T17:28:50.258Z] Copying: 898/1024 [MB] (11 MBps) [2024-10-30T17:28:51.646Z] Copying: 909/1024 [MB] (10 MBps) [2024-10-30T17:28:52.592Z] Copying: 919/1024 [MB] (10 MBps) [2024-10-30T17:28:53.536Z] Copying: 933/1024 [MB] (13 MBps) [2024-10-30T17:28:54.486Z] Copying: 950/1024 [MB] (17 MBps) [2024-10-30T17:28:55.484Z] Copying: 966/1024 [MB] (15 MBps) [2024-10-30T17:28:56.427Z] Copying: 976/1024 [MB] (10 MBps) [2024-10-30T17:28:57.368Z] Copying: 987/1024 [MB] (10 MBps) [2024-10-30T17:28:58.313Z] Copying: 999/1024 [MB] (11 MBps) [2024-10-30T17:28:59.259Z] Copying: 1014/1024 [MB] (15 MBps) [2024-10-30T17:28:59.259Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-30 17:28:59.212743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.278 [2024-10-30 17:28:59.212847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:16.278 [2024-10-30 17:28:59.212876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:16.278 [2024-10-30 17:28:59.212894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.278 [2024-10-30 17:28:59.212936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:16.278 [2024-10-30 17:28:59.218817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.278 [2024-10-30 17:28:59.218866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:16.278 [2024-10-30 17:28:59.218878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.850 ms 00:25:16.278 [2024-10-30 17:28:59.218951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.278 [2024-10-30 17:28:59.219185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.278 [2024-10-30 17:28:59.219210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:16.278 [2024-10-30 17:28:59.219221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:25:16.278 [2024-10-30 17:28:59.219229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.278 [2024-10-30 17:28:59.222678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.278 [2024-10-30 17:28:59.222707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:16.278 [2024-10-30 17:28:59.222718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.435 ms 00:25:16.278 [2024-10-30 17:28:59.222727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.278 [2024-10-30 17:28:59.228975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.278 [2024-10-30 17:28:59.229022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:16.278 [2024-10-30 17:28:59.229033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:25:16.278 [2024-10-30 17:28:59.229041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.278 [2024-10-30 17:28:59.255688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.278 [2024-10-30 17:28:59.255742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:16.278 [2024-10-30 17:28:59.255755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.583 ms 00:25:16.278 [2024-10-30 
17:28:59.255763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.542 [2024-10-30 17:28:59.272466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.542 [2024-10-30 17:28:59.272518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:16.542 [2024-10-30 17:28:59.272531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.656 ms 00:25:16.542 [2024-10-30 17:28:59.272539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.542 [2024-10-30 17:28:59.277704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.542 [2024-10-30 17:28:59.277753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:16.542 [2024-10-30 17:28:59.277771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.113 ms 00:25:16.542 [2024-10-30 17:28:59.277779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.542 [2024-10-30 17:28:59.303492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.542 [2024-10-30 17:28:59.303542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:16.542 [2024-10-30 17:28:59.303553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.697 ms 00:25:16.542 [2024-10-30 17:28:59.303561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.542 [2024-10-30 17:28:59.328597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.542 [2024-10-30 17:28:59.328656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:16.542 [2024-10-30 17:28:59.328666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.993 ms 00:25:16.542 [2024-10-30 17:28:59.328674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.542 [2024-10-30 17:28:59.353525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.542 [2024-10-30 17:28:59.353570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:16.542 [2024-10-30 17:28:59.353581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.806 ms 00:25:16.542 [2024-10-30 17:28:59.353588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.542 [2024-10-30 17:28:59.378263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.542 [2024-10-30 17:28:59.378315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:16.542 [2024-10-30 17:28:59.378327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.604 ms 00:25:16.542 [2024-10-30 17:28:59.378334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.542 [2024-10-30 17:28:59.378377] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:16.542 [2024-10-30 17:28:59.378393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:16.542 [2024-10-30 17:28:59.378412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:16.542 [2024-10-30 17:28:59.378421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378437] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 
17:28:59.378634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:25:16.542 [2024-10-30 17:28:59.378837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:16.542 [2024-10-30 17:28:59.378892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.378993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:16.543 [2024-10-30 17:28:59.379221] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:16.543 [2024-10-30 17:28:59.379231] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 616d42d0-a4d9-43e9-b1e2-61151db08656 00:25:16.543 [2024-10-30 17:28:59.379242] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:16.543 [2024-10-30 17:28:59.379250] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:16.543 [2024-10-30 
17:28:59.379258] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:16.543 [2024-10-30 17:28:59.379267] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:16.543 [2024-10-30 17:28:59.379275] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:16.543 [2024-10-30 17:28:59.379284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:16.543 [2024-10-30 17:28:59.379298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:16.543 [2024-10-30 17:28:59.379305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:16.543 [2024-10-30 17:28:59.379312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:16.543 [2024-10-30 17:28:59.379320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.543 [2024-10-30 17:28:59.379328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:16.543 [2024-10-30 17:28:59.379337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:25:16.543 [2024-10-30 17:28:59.379345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.543 [2024-10-30 17:28:59.392913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.543 [2024-10-30 17:28:59.392960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:16.543 [2024-10-30 17:28:59.392971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.549 ms 00:25:16.543 [2024-10-30 17:28:59.392979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.543 [2024-10-30 17:28:59.393405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.543 [2024-10-30 17:28:59.393428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:16.543 [2024-10-30 17:28:59.393438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:25:16.543 [2024-10-30 17:28:59.393453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.543 [2024-10-30 17:28:59.429874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.543 [2024-10-30 17:28:59.429927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.543 [2024-10-30 17:28:59.429940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.543 [2024-10-30 17:28:59.429950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.543 [2024-10-30 17:28:59.430020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.543 [2024-10-30 17:28:59.430030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.543 [2024-10-30 17:28:59.430040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.543 [2024-10-30 17:28:59.430056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.543 [2024-10-30 17:28:59.430136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.543 [2024-10-30 17:28:59.430148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.543 [2024-10-30 17:28:59.430158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.543 [2024-10-30 17:28:59.430167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.543 [2024-10-30 17:28:59.430185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:16.543 [2024-10-30 17:28:59.430194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.543 [2024-10-30 17:28:59.430221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.543 [2024-10-30 17:28:59.430230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.543 [2024-10-30 17:28:59.514139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.543 [2024-10-30 17:28:59.514227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.543 [2024-10-30 17:28:59.514243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.543 [2024-10-30 17:28:59.514251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.805 [2024-10-30 17:28:59.582671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.805 [2024-10-30 17:28:59.582731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.805 [2024-10-30 17:28:59.582744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.806 [2024-10-30 17:28:59.582759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.806 [2024-10-30 17:28:59.582819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.806 [2024-10-30 17:28:59.582829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.806 [2024-10-30 17:28:59.582839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.806 [2024-10-30 17:28:59.582847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.806 [2024-10-30 17:28:59.582901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.806 [2024-10-30 17:28:59.582911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.806 [2024-10-30 17:28:59.582920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.806 [2024-10-30 17:28:59.582928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.806 [2024-10-30 17:28:59.583029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.806 [2024-10-30 17:28:59.583040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.806 [2024-10-30 17:28:59.583049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.806 [2024-10-30 17:28:59.583056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.806 [2024-10-30 17:28:59.583088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.806 [2024-10-30 17:28:59.583098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:16.806 [2024-10-30 17:28:59.583107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.806 [2024-10-30 17:28:59.583115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.806 [2024-10-30 17:28:59.583158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.806 [2024-10-30 17:28:59.583169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.806 [2024-10-30 17:28:59.583177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.806 [2024-10-30 17:28:59.583186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.806 
[2024-10-30 17:28:59.583256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.806 [2024-10-30 17:28:59.583267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.806 [2024-10-30 17:28:59.583276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.806 [2024-10-30 17:28:59.583285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.806 [2024-10-30 17:28:59.583423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.661 ms, result 0 00:25:17.375 00:25:17.375 00:25:17.375 17:29:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:19.921 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:19.921 Process with pid 76855 is not found 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76855 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 76855 ']' 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 76855 00:25:19.921 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76855) - No such process 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 76855 is not found' 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:19.921 Remove shared memory files 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:19.921 17:29:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:20.184 17:29:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:20.184 17:29:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:20.184 00:25:20.184 real 4m9.712s 00:25:20.184 user 4m32.016s 00:25:20.184 sys 0m25.089s 00:25:20.184 17:29:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:20.184 17:29:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:20.184 ************************************ 00:25:20.184 END TEST ftl_dirty_shutdown 00:25:20.184 ************************************ 00:25:20.184 17:29:02 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:20.184 17:29:02 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:20.184 17:29:02 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:20.184 17:29:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:20.184 ************************************ 00:25:20.184 START TEST ftl_upgrade_shutdown 00:25:20.184 ************************************ 00:25:20.184 17:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:20.184 * Looking for test storage... 00:25:20.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:20.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.184 --rc genhtml_branch_coverage=1 00:25:20.184 --rc genhtml_function_coverage=1 00:25:20.184 --rc genhtml_legend=1 00:25:20.184 --rc geninfo_all_blocks=1 00:25:20.184 --rc geninfo_unexecuted_blocks=1 00:25:20.184 00:25:20.184 ' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:20.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.184 --rc genhtml_branch_coverage=1 00:25:20.184 --rc genhtml_function_coverage=1 00:25:20.184 --rc genhtml_legend=1 00:25:20.184 --rc geninfo_all_blocks=1 00:25:20.184 --rc geninfo_unexecuted_blocks=1 00:25:20.184 00:25:20.184 ' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:20.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.184 --rc genhtml_branch_coverage=1 00:25:20.184 --rc genhtml_function_coverage=1 00:25:20.184 --rc genhtml_legend=1 00:25:20.184 --rc geninfo_all_blocks=1 00:25:20.184 --rc geninfo_unexecuted_blocks=1 00:25:20.184 00:25:20.184 ' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:20.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.184 --rc genhtml_branch_coverage=1 00:25:20.184 --rc genhtml_function_coverage=1 00:25:20.184 --rc genhtml_legend=1 00:25:20.184 --rc geninfo_all_blocks=1 00:25:20.184 --rc geninfo_unexecuted_blocks=1 00:25:20.184 00:25:20.184 ' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:20.184 17:29:03 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79563 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79563 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79563 ']' 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:20.184 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.185 17:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:20.185 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:20.185 17:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:20.445 [2024-10-30 17:29:03.241618] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:25:20.445 [2024-10-30 17:29:03.242404] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79563 ] 00:25:20.445 [2024-10-30 17:29:03.409927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.705 [2024-10-30 17:29:03.517080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.277 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:21.278 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:21.539 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:21.539 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:21.539 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:21.539 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:25:21.539 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:21.539 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:25:21.539 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:25:21.539 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:21.800 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:21.800 { 00:25:21.800 "name": "basen1", 00:25:21.800 "aliases": [ 00:25:21.800 "14d36f83-7056-4cf2-a1c0-97b15dd08388" 00:25:21.800 ], 00:25:21.800 "product_name": "NVMe disk", 00:25:21.800 "block_size": 4096, 00:25:21.800 "num_blocks": 1310720, 00:25:21.800 "uuid": "14d36f83-7056-4cf2-a1c0-97b15dd08388", 00:25:21.800 "numa_id": -1, 00:25:21.800 "assigned_rate_limits": { 00:25:21.800 "rw_ios_per_sec": 0, 00:25:21.800 "rw_mbytes_per_sec": 0, 00:25:21.800 "r_mbytes_per_sec": 0, 00:25:21.800 "w_mbytes_per_sec": 0 00:25:21.800 }, 00:25:21.800 "claimed": true, 00:25:21.800 "claim_type": "read_many_write_one", 00:25:21.800 "zoned": false, 00:25:21.800 "supported_io_types": { 00:25:21.800 "read": true, 00:25:21.800 "write": true, 00:25:21.800 "unmap": true, 00:25:21.800 "flush": true, 00:25:21.800 "reset": true, 00:25:21.800 "nvme_admin": true, 00:25:21.800 "nvme_io": true, 00:25:21.800 "nvme_io_md": false, 00:25:21.800 "write_zeroes": true, 00:25:21.800 "zcopy": false, 00:25:21.800 "get_zone_info": false, 00:25:21.800 "zone_management": false, 00:25:21.801 "zone_append": false, 00:25:21.801 "compare": true, 00:25:21.801 "compare_and_write": false, 00:25:21.801 "abort": true, 00:25:21.801 "seek_hole": false, 00:25:21.801 "seek_data": false, 00:25:21.801 "copy": true, 00:25:21.801 "nvme_iov_md": false 00:25:21.801 }, 00:25:21.801 "driver_specific": { 00:25:21.801 "nvme": [ 00:25:21.801 { 00:25:21.801 "pci_address": "0000:00:11.0", 00:25:21.801 "trid": { 00:25:21.801 "trtype": "PCIe", 00:25:21.801 "traddr": "0000:00:11.0" 00:25:21.801 }, 00:25:21.801 "ctrlr_data": { 00:25:21.801 "cntlid": 0, 00:25:21.801 "vendor_id": "0x1b36", 00:25:21.801 "model_number": "QEMU NVMe Ctrl", 00:25:21.801 "serial_number": "12341", 00:25:21.801 "firmware_revision": "8.0.0", 00:25:21.801 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:21.801 "oacs": { 00:25:21.801 "security": 0, 00:25:21.801 "format": 1, 00:25:21.801 "firmware": 0, 00:25:21.801 "ns_manage": 1 00:25:21.801 }, 00:25:21.801 "multi_ctrlr": false, 00:25:21.801 "ana_reporting": false 00:25:21.801 }, 00:25:21.801 "vs": { 00:25:21.801 "nvme_version": "1.4" 00:25:21.801 }, 00:25:21.801 "ns_data": { 00:25:21.801 "id": 1, 00:25:21.801 "can_share": false 00:25:21.801 } 00:25:21.801 } 00:25:21.801 ], 00:25:21.801 "mp_policy": "active_passive" 00:25:21.801 } 00:25:21.801 } 00:25:21.801 ]' 00:25:21.801 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:21.801 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:25:21.801 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:21.801 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:25:21.801 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:25:21.801 17:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:25:22.063 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:22.063 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:22.063 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:22.063 17:29:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:22.063 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:22.063 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=fd1a2574-da88-4343-8696-180827f642a3 00:25:22.063 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:22.063 17:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd1a2574-da88-4343-8696-180827f642a3 00:25:22.324 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:22.586 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=3a43a14d-db21-4a05-aa8c-9ac544b8c5fa 00:25:22.586 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 3a43a14d-db21-4a05-aa8c-9ac544b8c5fa 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=582d5554-bab4-4231-889a-b1d19493934b 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 582d5554-bab4-4231-889a-b1d19493934b ]] 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 582d5554-bab4-4231-889a-b1d19493934b 5120 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=582d5554-bab4-4231-889a-b1d19493934b 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 582d5554-bab4-4231-889a-b1d19493934b 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=582d5554-bab4-4231-889a-b1d19493934b 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:25:22.848 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 582d5554-bab4-4231-889a-b1d19493934b 00:25:23.109 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:23.109 { 00:25:23.109 "name": "582d5554-bab4-4231-889a-b1d19493934b", 00:25:23.109 "aliases": [ 00:25:23.109 "lvs/basen1p0" 00:25:23.109 ], 00:25:23.109 "product_name": "Logical Volume", 00:25:23.109 "block_size": 4096, 00:25:23.109 "num_blocks": 5242880, 00:25:23.109 "uuid": "582d5554-bab4-4231-889a-b1d19493934b", 00:25:23.109 "assigned_rate_limits": { 00:25:23.109 "rw_ios_per_sec": 0, 00:25:23.109 "rw_mbytes_per_sec": 0, 00:25:23.109 "r_mbytes_per_sec": 0, 00:25:23.109 "w_mbytes_per_sec": 0 00:25:23.109 }, 00:25:23.109 "claimed": false, 00:25:23.109 "zoned": false, 00:25:23.109 "supported_io_types": { 00:25:23.109 "read": true, 00:25:23.109 "write": true, 00:25:23.109 "unmap": true, 00:25:23.109 "flush": false, 00:25:23.109 "reset": true, 00:25:23.109 "nvme_admin": false, 00:25:23.109 "nvme_io": false, 00:25:23.109 "nvme_io_md": false, 00:25:23.109 "write_zeroes": 
true, 00:25:23.109 "zcopy": false, 00:25:23.109 "get_zone_info": false, 00:25:23.109 "zone_management": false, 00:25:23.109 "zone_append": false, 00:25:23.109 "compare": false, 00:25:23.109 "compare_and_write": false, 00:25:23.109 "abort": false, 00:25:23.109 "seek_hole": true, 00:25:23.109 "seek_data": true, 00:25:23.109 "copy": false, 00:25:23.109 "nvme_iov_md": false 00:25:23.109 }, 00:25:23.109 "driver_specific": { 00:25:23.109 "lvol": { 00:25:23.109 "lvol_store_uuid": "3a43a14d-db21-4a05-aa8c-9ac544b8c5fa", 00:25:23.109 "base_bdev": "basen1", 00:25:23.109 "thin_provision": true, 00:25:23.109 "num_allocated_clusters": 0, 00:25:23.109 "snapshot": false, 00:25:23.109 "clone": false, 00:25:23.110 "esnap_clone": false 00:25:23.110 } 00:25:23.110 } 00:25:23.110 } 00:25:23.110 ]' 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:23.110 17:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:23.370 17:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:23.370 17:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:23.370 17:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:23.630 17:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:23.630 17:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:23.630 17:29:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 582d5554-bab4-4231-889a-b1d19493934b -c cachen1p0 --l2p_dram_limit 2 00:25:23.892 [2024-10-30 17:29:06.665979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.666015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:23.892 [2024-10-30 17:29:06.666027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:23.892 [2024-10-30 17:29:06.666034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.666079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.666086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:23.892 [2024-10-30 17:29:06.666093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:25:23.892 [2024-10-30 17:29:06.666099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.666116] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:23.892 [2024-10-30 
17:29:06.666683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:23.892 [2024-10-30 17:29:06.666700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.666706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:23.892 [2024-10-30 17:29:06.666714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.586 ms 00:25:23.892 [2024-10-30 17:29:06.666719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.666746] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 79468eb4-47cf-4986-9686-5f4d51d7ff2f 00:25:23.892 [2024-10-30 17:29:06.667682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.667700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:23.892 [2024-10-30 17:29:06.667708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:23.892 [2024-10-30 17:29:06.667715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.672319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.672343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:23.892 [2024-10-30 17:29:06.672350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.548 ms 00:25:23.892 [2024-10-30 17:29:06.672359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.672389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.672397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:23.892 [2024-10-30 17:29:06.672403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:23.892 [2024-10-30 17:29:06.672411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.672439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.672447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:23.892 [2024-10-30 17:29:06.672453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:23.892 [2024-10-30 17:29:06.672462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.672480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:23.892 [2024-10-30 17:29:06.675348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.675370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:23.892 [2024-10-30 17:29:06.675379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.873 ms 00:25:23.892 [2024-10-30 17:29:06.675387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.675407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.675414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:23.892 [2024-10-30 17:29:06.675421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:23.892 [2024-10-30 17:29:06.675427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.675452] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:23.892 [2024-10-30 17:29:06.675557] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:23.892 [2024-10-30 17:29:06.675570] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:23.892 [2024-10-30 17:29:06.675579] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:23.892 [2024-10-30 17:29:06.675587] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:23.892 [2024-10-30 17:29:06.675594] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:23.892 [2024-10-30 17:29:06.675601] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:23.892 [2024-10-30 17:29:06.675607] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:23.892 [2024-10-30 17:29:06.675614] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:23.892 [2024-10-30 17:29:06.675619] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:23.892 [2024-10-30 17:29:06.675628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.675634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:23.892 [2024-10-30 17:29:06.675641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:25:23.892 [2024-10-30 17:29:06.675646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.675711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.892 [2024-10-30 17:29:06.675717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:23.892 [2024-10-30 17:29:06.675725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:25:23.892 [2024-10-30 17:29:06.675735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.892 [2024-10-30 17:29:06.675809] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:23.892 [2024-10-30 17:29:06.675817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:23.892 [2024-10-30 17:29:06.675825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:23.892 [2024-10-30 17:29:06.675830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.892 [2024-10-30 17:29:06.675837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:23.892 [2024-10-30 17:29:06.675842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:23.892 [2024-10-30 17:29:06.675849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:23.892 [2024-10-30 17:29:06.675854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:23.892 [2024-10-30 17:29:06.675860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:23.892 [2024-10-30 17:29:06.675865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.892 [2024-10-30 17:29:06.675872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:23.892 [2024-10-30 17:29:06.675878] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:23.892 [2024-10-30 17:29:06.675884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.892 [2024-10-30 17:29:06.675889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:23.892 [2024-10-30 17:29:06.675895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:23.892 [2024-10-30 17:29:06.675900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.892 [2024-10-30 17:29:06.675908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:23.892 [2024-10-30 17:29:06.675914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:23.892 [2024-10-30 17:29:06.675920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.892 [2024-10-30 17:29:06.675925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:23.892 [2024-10-30 17:29:06.675932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:23.892 [2024-10-30 17:29:06.675937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:23.892 [2024-10-30 17:29:06.675944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:23.892 [2024-10-30 17:29:06.675949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:23.892 [2024-10-30 17:29:06.675955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:23.892 [2024-10-30 17:29:06.675960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:23.892 [2024-10-30 17:29:06.675966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:23.892 [2024-10-30 17:29:06.675971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:23.892 [2024-10-30 17:29:06.675977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:23.892 [2024-10-30 17:29:06.675982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:23.892 [2024-10-30 17:29:06.675988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:23.892 [2024-10-30 17:29:06.675993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:23.892 [2024-10-30 17:29:06.676001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:23.892 [2024-10-30 17:29:06.676005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.892 [2024-10-30 17:29:06.676012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:23.892 [2024-10-30 17:29:06.676017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:23.892 [2024-10-30 17:29:06.676023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.893 [2024-10-30 17:29:06.676027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:23.893 [2024-10-30 17:29:06.676034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:23.893 [2024-10-30 17:29:06.676039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.893 [2024-10-30 17:29:06.676045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:23.893 [2024-10-30 17:29:06.676050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:23.893 [2024-10-30 17:29:06.676055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.893 [2024-10-30 17:29:06.676060] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:23.893 [2024-10-30 17:29:06.676067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:23.893 [2024-10-30 17:29:06.676072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:23.893 [2024-10-30 17:29:06.676079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.893 [2024-10-30 17:29:06.676085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:23.893 [2024-10-30 17:29:06.676093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:23.893 [2024-10-30 17:29:06.676099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:23.893 [2024-10-30 17:29:06.676106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:23.893 [2024-10-30 17:29:06.676111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:23.893 [2024-10-30 17:29:06.676117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:23.893 [2024-10-30 17:29:06.676125] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:23.893 [2024-10-30 17:29:06.676133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:23.893 [2024-10-30 17:29:06.676146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:23.893 [2024-10-30 17:29:06.676163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:23.893 [2024-10-30 17:29:06.676170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:23.893 [2024-10-30 17:29:06.676176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:23.893 [2024-10-30 17:29:06.676182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:23.893 [2024-10-30 17:29:06.676240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:23.893 [2024-10-30 17:29:06.676248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:23.893 [2024-10-30 17:29:06.676263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:23.893 [2024-10-30 17:29:06.676269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:23.893 [2024-10-30 17:29:06.676276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:23.893 [2024-10-30 17:29:06.676282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.893 [2024-10-30 17:29:06.676289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:23.893 [2024-10-30 17:29:06.676295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:25:23.893 [2024-10-30 17:29:06.676302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.893 [2024-10-30 17:29:06.676330] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:25:23.893 [2024-10-30 17:29:06.676339] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:28.103 [2024-10-30 17:29:10.957600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:10.957691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:28.104 [2024-10-30 17:29:10.957710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4281.251 ms 00:25:28.104 [2024-10-30 17:29:10.957721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.104 [2024-10-30 17:29:10.990185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:10.990264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:28.104 [2024-10-30 17:29:10.990278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.195 ms 00:25:28.104 [2024-10-30 17:29:10.990289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.104 [2024-10-30 17:29:10.990373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:10.990386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:28.104 [2024-10-30 17:29:10.990396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:28.104 [2024-10-30 17:29:10.990410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.104 [2024-10-30 17:29:11.026133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:11.026185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:28.104 [2024-10-30 17:29:11.026208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.673 ms 00:25:28.104 [2024-10-30 17:29:11.026221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.104 [2024-10-30 17:29:11.026256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:11.026267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:28.104 [2024-10-30 17:29:11.026276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:28.104 [2024-10-30 17:29:11.026290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.104 [2024-10-30 17:29:11.026884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:11.026928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:28.104 [2024-10-30 17:29:11.026939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:25:28.104 [2024-10-30 17:29:11.026950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.104 [2024-10-30 17:29:11.027005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:11.027018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:28.104 [2024-10-30 17:29:11.027027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:25:28.104 [2024-10-30 17:29:11.027040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.104 [2024-10-30 17:29:11.044689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:11.044732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:28.104 [2024-10-30 17:29:11.044743] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.626 ms 00:25:28.104 [2024-10-30 17:29:11.044756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.104 [2024-10-30 17:29:11.058123] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:28.104 [2024-10-30 17:29:11.059531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.104 [2024-10-30 17:29:11.059569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:28.104 [2024-10-30 17:29:11.059583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.683 ms 00:25:28.104 [2024-10-30 17:29:11.059591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.107384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.107443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:28.366 [2024-10-30 17:29:11.107461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.753 ms 00:25:28.366 [2024-10-30 17:29:11.107470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.107583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.107594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:28.366 [2024-10-30 17:29:11.107610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:25:28.366 [2024-10-30 17:29:11.107621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.133796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.133855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:28.366 [2024-10-30 17:29:11.133872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.111 ms 00:25:28.366 [2024-10-30 17:29:11.133880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.159420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.159466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:28.366 [2024-10-30 17:29:11.159481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.479 ms 00:25:28.366 [2024-10-30 17:29:11.159488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.160091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.160112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:28.366 [2024-10-30 17:29:11.160124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:25:28.366 [2024-10-30 17:29:11.160132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.249887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.249938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:28.366 [2024-10-30 17:29:11.249959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 89.703 ms 00:25:28.366 [2024-10-30 17:29:11.249968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.277112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:28.366 [2024-10-30 17:29:11.277161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:28.366 [2024-10-30 17:29:11.277188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.053 ms 00:25:28.366 [2024-10-30 17:29:11.277196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.303288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.303334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:28.366 [2024-10-30 17:29:11.303348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.016 ms 00:25:28.366 [2024-10-30 17:29:11.303355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.328893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.328956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:28.366 [2024-10-30 17:29:11.328972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.480 ms 00:25:28.366 [2024-10-30 17:29:11.328980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.329040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.329050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:28.366 [2024-10-30 17:29:11.329065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:28.366 [2024-10-30 17:29:11.329073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.329170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.366 [2024-10-30 17:29:11.329180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:28.366 [2024-10-30 17:29:11.329191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:25:28.366 [2024-10-30 17:29:11.329216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.366 [2024-10-30 17:29:11.330803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4664.304 ms, result 0 00:25:28.366 { 00:25:28.366 "name": "ftl", 00:25:28.366 "uuid": "79468eb4-47cf-4986-9686-5f4d51d7ff2f" 00:25:28.366 } 00:25:28.628 17:29:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:28.628 [2024-10-30 17:29:11.545526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.628 17:29:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:28.889 17:29:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:29.151 [2024-10-30 17:29:11.978048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:29.151 17:29:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:29.411 [2024-10-30 17:29:12.183679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:29.411 17:29:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:29.672 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:29.672 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:29.672 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:29.672 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:29.672 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:29.673 Fill FTL, iteration 1 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=79701 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 79701 /var/tmp/spdk.tgt.sock 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79701 ']' 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:29.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:29.673 17:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:29.934 [2024-10-30 17:29:12.653592] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
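At this point the main target is fully assembled and exported over NVMe/TCP. Condensed as a reading aid, the bring-up traced above amounts to the following RPC sequence (arguments exactly as printed in this log; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the UUIDs are the ones reported by the earlier calls):

    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0        # base namespace, shows up as basen1
    rpc.py bdev_lvol_delete_lvstore -u fd1a2574-da88-4343-8696-180827f642a3   # clear_lvols: drop the stale lvstore found on basen1
    rpc.py bdev_lvol_create_lvstore basen1 lvs
    rpc.py bdev_lvol_create basen1p0 20480 -t -u 3a43a14d-db21-4a05-aa8c-9ac544b8c5fa   # thin-provisioned 20 GiB FTL base
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0       # cache namespace, shows up as cachen1
    rpc.py bdev_split_create cachen1 -s 5120 1                                # 5 GiB split, cachen1p0, used as the NV cache
    rpc.py -t 60 bdev_ftl_create -b ftl -d 582d5554-bab4-4231-889a-b1d19493934b -c cachen1p0 --l2p_dram_limit 2   # L2P DRAM capped at 2 MiB per the l2p notice above
    rpc.py nvmf_create_transport --trtype TCP
    rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    rpc.py save_config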
00:25:29.934 [2024-10-30 17:29:12.653731] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79701 ] 00:25:29.934 [2024-10-30 17:29:12.814738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.196 [2024-10-30 17:29:12.960989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.770 17:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:30.770 17:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:25:30.770 17:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:31.032 ftln1 00:25:31.032 17:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:31.032 17:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 79701 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79701 ']' 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79701 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79701 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:31.294 killing process with pid 79701 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79701' 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 79701 00:25:31.294 17:29:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79701 00:25:32.678 17:29:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:32.678 17:29:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:32.678 [2024-10-30 17:29:15.612432] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
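For the initiator side, tcp_dd first runs tcp_initiator_setup: it launches a second, single-core spdk_tgt on its own RPC socket, attaches the exported subsystem over NVMe/TCP (the namespace appears as ftln1), snapshots the resulting bdev configuration as JSON, and then kills that target again so spdk_dd can replay the configuration standalone. A condensed sketch of the commands traced above (the captured JSON is presumably test/ftl/config/ini.json, the file later passed to spdk_dd via --json):

    spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0            # attaches as ftln1
    { echo '{"subsystems": ['; rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev; echo ']}'; } > ini.json
    kill 79701                                                                 # helper target no longer needed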
00:25:32.679 [2024-10-30 17:29:15.612537] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79738 ] 00:25:32.938 [2024-10-30 17:29:15.768143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.938 [2024-10-30 17:29:15.854953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.316  [2024-10-30T17:29:18.233Z] Copying: 230/1024 [MB] (230 MBps) [2024-10-30T17:29:19.610Z] Copying: 465/1024 [MB] (235 MBps) [2024-10-30T17:29:20.175Z] Copying: 709/1024 [MB] (244 MBps) [2024-10-30T17:29:20.741Z] Copying: 949/1024 [MB] (240 MBps) [2024-10-30T17:29:21.308Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:25:38.327 00:25:38.327 Calculate MD5 checksum, iteration 1 00:25:38.327 17:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:38.327 17:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:38.327 17:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:38.327 17:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:38.327 17:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:38.327 17:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:38.327 17:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:38.327 17:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:38.327 [2024-10-30 17:29:21.161338] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:25:38.327 [2024-10-30 17:29:21.161463] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79802 ] 00:25:38.587 [2024-10-30 17:29:21.320004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.587 [2024-10-30 17:29:21.422634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.962  [2024-10-30T17:29:23.548Z] Copying: 651/1024 [MB] (651 MBps) [2024-10-30T17:29:23.836Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:25:40.855 00:25:41.114 17:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:41.114 17:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:43.025 17:29:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:43.025 Fill FTL, iteration 2 00:25:43.025 17:29:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4a65fa3bd8cbe3c75b996957c0867227 00:25:43.025 17:29:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:43.025 17:29:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:43.025 17:29:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:43.025 17:29:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:43.025 17:29:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:43.025 17:29:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:43.025 17:29:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:43.025 17:29:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:43.025 17:29:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:43.283 [2024-10-30 17:29:26.068467] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
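Each of the two iterations follows the same fill-and-checksum pattern: write 1024 x 1 MiB blocks of /dev/urandom into ftln1 at the current offset with queue depth 2 (1 GiB per iteration, matching size=1073741824 above), read the same range back into the scratch file, and record the file's MD5 in the sums[] array. Condensed from the spdk_dd invocations traced above, with ini.json and file abbreviating the full test/ftl/config/ini.json and test/ftl/file paths; only --seek/--skip advance between iterations:

    # iteration 1 (sums[0] = 4a65fa3bd8cbe3c75b996957c0867227 above)
    spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
    spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json \
        --ib=ftln1 --of=file --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[0]=$(md5sum file | cut -f1 -d ' ')
    # iteration 2: identical, with --seek=1024 / --skip=1024

In this run the fills averaged about 237-238 MBps and the MD5 read-backs 617-642 MBps, per the Copying progress lines.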
00:25:43.283 [2024-10-30 17:29:26.068581] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79858 ] 00:25:43.283 [2024-10-30 17:29:26.223993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.541 [2024-10-30 17:29:26.311308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.918  [2024-10-30T17:29:28.835Z] Copying: 241/1024 [MB] (241 MBps) [2024-10-30T17:29:29.770Z] Copying: 485/1024 [MB] (244 MBps) [2024-10-30T17:29:30.703Z] Copying: 720/1024 [MB] (235 MBps) [2024-10-30T17:29:30.962Z] Copying: 953/1024 [MB] (233 MBps) [2024-10-30T17:29:31.897Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:25:48.916 00:25:48.916 Calculate MD5 checksum, iteration 2 00:25:48.916 17:29:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:48.916 17:29:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:48.916 17:29:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:48.916 17:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:48.916 17:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:48.916 17:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:48.916 17:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:48.916 17:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:48.916 [2024-10-30 17:29:31.639544] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:25:48.916 [2024-10-30 17:29:31.639660] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79918 ] 00:25:48.916 [2024-10-30 17:29:31.795806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.916 [2024-10-30 17:29:31.880271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.819  [2024-10-30T17:29:34.060Z] Copying: 642/1024 [MB] (642 MBps) [2024-10-30T17:29:34.998Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:25:52.017 00:25:52.017 17:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:52.017 17:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:54.565 17:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:54.565 17:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=415c40194967f17094f1b68e90ef43f7 00:25:54.565 17:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:54.565 17:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:54.565 17:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:54.565 [2024-10-30 17:29:37.168712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.565 [2024-10-30 17:29:37.168834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:54.565 [2024-10-30 17:29:37.168850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:54.565 [2024-10-30 17:29:37.168857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.565 [2024-10-30 17:29:37.168881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.565 [2024-10-30 17:29:37.168887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:54.565 [2024-10-30 17:29:37.168894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:54.565 [2024-10-30 17:29:37.168900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.565 [2024-10-30 17:29:37.168918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.565 [2024-10-30 17:29:37.168924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:54.565 [2024-10-30 17:29:37.168931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:54.565 [2024-10-30 17:29:37.168936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.565 [2024-10-30 17:29:37.168985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.263 ms, result 0 00:25:54.565 true 00:25:54.565 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:54.565 { 00:25:54.565 "name": "ftl", 00:25:54.565 "properties": [ 00:25:54.565 { 00:25:54.565 "name": "superblock_version", 00:25:54.565 "value": 5, 00:25:54.565 "read-only": true 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "name": "base_device", 00:25:54.565 "bands": [ 00:25:54.565 { 00:25:54.565 "id": 0, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 
00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 1, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 2, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 3, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 4, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 5, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 6, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 7, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 8, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 9, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 10, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 11, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 12, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 13, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 14, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 15, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 16, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 17, 00:25:54.565 "state": "FREE", 00:25:54.565 "validity": 0.0 00:25:54.565 } 00:25:54.565 ], 00:25:54.565 "read-only": true 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "name": "cache_device", 00:25:54.565 "type": "bdev", 00:25:54.565 "chunks": [ 00:25:54.565 { 00:25:54.565 "id": 0, 00:25:54.565 "state": "INACTIVE", 00:25:54.565 "utilization": 0.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 1, 00:25:54.565 "state": "CLOSED", 00:25:54.565 "utilization": 1.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 2, 00:25:54.565 "state": "CLOSED", 00:25:54.565 "utilization": 1.0 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 3, 00:25:54.565 "state": "OPEN", 00:25:54.565 "utilization": 0.001953125 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "id": 4, 00:25:54.565 "state": "OPEN", 00:25:54.565 "utilization": 0.0 00:25:54.565 } 00:25:54.565 ], 00:25:54.565 "read-only": true 00:25:54.565 }, 00:25:54.565 { 00:25:54.565 "name": "verbose_mode", 00:25:54.565 "value": true, 00:25:54.565 "unit": "", 00:25:54.566 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:54.566 }, 00:25:54.566 { 00:25:54.566 "name": "prep_upgrade_on_shutdown", 00:25:54.566 "value": false, 00:25:54.566 "unit": "", 00:25:54.566 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:54.566 } 00:25:54.566 ] 00:25:54.566 } 00:25:54.566 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:54.827 [2024-10-30 17:29:37.569037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:54.827 [2024-10-30 17:29:37.569069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:54.827 [2024-10-30 17:29:37.569078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:54.827 [2024-10-30 17:29:37.569084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.827 [2024-10-30 17:29:37.569100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.827 [2024-10-30 17:29:37.569106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:54.827 [2024-10-30 17:29:37.569112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:54.827 [2024-10-30 17:29:37.569118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.827 [2024-10-30 17:29:37.569132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:54.827 [2024-10-30 17:29:37.569138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:54.827 [2024-10-30 17:29:37.569144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:54.827 [2024-10-30 17:29:37.569149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:54.827 [2024-10-30 17:29:37.569190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.146 ms, result 0 00:25:54.827 true 00:25:54.827 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:54.827 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:54.827 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:54.827 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:54.827 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:54.827 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:55.089 [2024-10-30 17:29:37.969345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.089 [2024-10-30 17:29:37.969374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:55.089 [2024-10-30 17:29:37.969383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:55.089 [2024-10-30 17:29:37.969388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.089 [2024-10-30 17:29:37.969404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.089 [2024-10-30 17:29:37.969410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:55.089 [2024-10-30 17:29:37.969416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:55.089 [2024-10-30 17:29:37.969421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.089 [2024-10-30 17:29:37.969435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.089 [2024-10-30 17:29:37.969441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:55.089 [2024-10-30 17:29:37.969447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:55.089 [2024-10-30 17:29:37.969452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:55.089 [2024-10-30 17:29:37.969493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.138 ms, result 0 00:25:55.089 true 00:25:55.089 17:29:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:55.351 { 00:25:55.351 "name": "ftl", 00:25:55.351 "properties": [ 00:25:55.351 { 00:25:55.351 "name": "superblock_version", 00:25:55.351 "value": 5, 00:25:55.351 "read-only": true 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "name": "base_device", 00:25:55.351 "bands": [ 00:25:55.351 { 00:25:55.351 "id": 0, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 1, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 2, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 3, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 4, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 5, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 6, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 7, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 8, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 9, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 10, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 11, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 12, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 13, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 14, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 15, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 16, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 17, 00:25:55.351 "state": "FREE", 00:25:55.351 "validity": 0.0 00:25:55.351 } 00:25:55.351 ], 00:25:55.351 "read-only": true 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "name": "cache_device", 00:25:55.351 "type": "bdev", 00:25:55.351 "chunks": [ 00:25:55.351 { 00:25:55.351 "id": 0, 00:25:55.351 "state": "INACTIVE", 00:25:55.351 "utilization": 0.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 1, 00:25:55.351 "state": "CLOSED", 00:25:55.351 "utilization": 1.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 2, 00:25:55.351 "state": "CLOSED", 00:25:55.351 "utilization": 1.0 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 3, 00:25:55.351 "state": "OPEN", 00:25:55.351 "utilization": 0.001953125 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "id": 4, 00:25:55.351 "state": "OPEN", 00:25:55.351 "utilization": 0.0 00:25:55.351 } 00:25:55.351 ], 00:25:55.351 "read-only": true 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "name": "verbose_mode", 
00:25:55.351 "value": true, 00:25:55.351 "unit": "", 00:25:55.351 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:55.351 }, 00:25:55.351 { 00:25:55.351 "name": "prep_upgrade_on_shutdown", 00:25:55.351 "value": true, 00:25:55.351 "unit": "", 00:25:55.351 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:55.351 } 00:25:55.351 ] 00:25:55.351 } 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79563 ]] 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79563 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79563 ']' 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79563 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79563 00:25:55.351 killing process with pid 79563 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79563' 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 79563 00:25:55.351 17:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79563 00:25:55.923 [2024-10-30 17:29:38.761255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:55.923 [2024-10-30 17:29:38.772517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.923 [2024-10-30 17:29:38.772548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:55.923 [2024-10-30 17:29:38.772558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:55.923 [2024-10-30 17:29:38.772565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.923 [2024-10-30 17:29:38.772581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:55.923 [2024-10-30 17:29:38.774639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.923 [2024-10-30 17:29:38.774662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:55.923 [2024-10-30 17:29:38.774670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.047 ms 00:25:55.923 [2024-10-30 17:29:38.774676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.605520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.605662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:04.061 [2024-10-30 17:29:46.605679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7830.800 ms 00:26:04.061 [2024-10-30 17:29:46.605686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.606623] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.606649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:04.061 [2024-10-30 17:29:46.606657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms 00:26:04.061 [2024-10-30 17:29:46.606663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.607605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.607624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:04.061 [2024-10-30 17:29:46.607632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:26:04.061 [2024-10-30 17:29:46.607639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.615226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.615251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:04.061 [2024-10-30 17:29:46.615259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.558 ms 00:26:04.061 [2024-10-30 17:29:46.615265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.620619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.620721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:04.061 [2024-10-30 17:29:46.620733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.329 ms 00:26:04.061 [2024-10-30 17:29:46.620739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.620800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.620808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:04.061 [2024-10-30 17:29:46.620815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:04.061 [2024-10-30 17:29:46.620821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.627891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.627983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:04.061 [2024-10-30 17:29:46.627993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.055 ms 00:26:04.061 [2024-10-30 17:29:46.627999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.635210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.635296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:04.061 [2024-10-30 17:29:46.635343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.188 ms 00:26:04.061 [2024-10-30 17:29:46.635360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.642353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.642439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:04.061 [2024-10-30 17:29:46.642485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.964 ms 00:26:04.061 [2024-10-30 17:29:46.642501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.649499] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.649586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:04.061 [2024-10-30 17:29:46.649628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.947 ms 00:26:04.061 [2024-10-30 17:29:46.649644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.649685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:04.061 [2024-10-30 17:29:46.649791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:04.061 [2024-10-30 17:29:46.649825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:04.061 [2024-10-30 17:29:46.649854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:04.061 [2024-10-30 17:29:46.649877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.649898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.649920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.649963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.649987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:04.061 [2024-10-30 17:29:46.650330] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:04.061 [2024-10-30 17:29:46.650346] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 79468eb4-47cf-4986-9686-5f4d51d7ff2f 00:26:04.061 [2024-10-30 17:29:46.650369] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:04.061 [2024-10-30 17:29:46.650409] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:26:04.061 [2024-10-30 17:29:46.650425] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:04.061 [2024-10-30 17:29:46.650441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:04.061 [2024-10-30 17:29:46.650455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:04.061 [2024-10-30 17:29:46.650485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:04.061 [2024-10-30 17:29:46.650502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:04.061 [2024-10-30 17:29:46.650515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:04.061 [2024-10-30 17:29:46.650528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:04.061 [2024-10-30 17:29:46.650613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.650633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:04.061 [2024-10-30 17:29:46.650650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.928 ms 00:26:04.061 [2024-10-30 17:29:46.650664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.660054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.660137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:04.061 [2024-10-30 17:29:46.660177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.361 ms 00:26:04.061 [2024-10-30 17:29:46.660193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.660491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.061 [2024-10-30 17:29:46.660553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:04.061 [2024-10-30 17:29:46.660594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:26:04.061 [2024-10-30 17:29:46.660611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.693159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.061 [2024-10-30 17:29:46.693267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:04.061 [2024-10-30 17:29:46.693309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.061 [2024-10-30 17:29:46.693326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.693362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.061 [2024-10-30 17:29:46.693379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:04.061 [2024-10-30 17:29:46.693393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.061 [2024-10-30 17:29:46.693407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.693473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.061 [2024-10-30 17:29:46.693520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:04.061 [2024-10-30 17:29:46.693537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.061 [2024-10-30 17:29:46.693552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.693573] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.061 [2024-10-30 17:29:46.693593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:04.061 [2024-10-30 17:29:46.693607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.061 [2024-10-30 17:29:46.693621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.061 [2024-10-30 17:29:46.751747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.061 [2024-10-30 17:29:46.751873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:04.062 [2024-10-30 17:29:46.751920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.062 [2024-10-30 17:29:46.751937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.062 [2024-10-30 17:29:46.799874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.062 [2024-10-30 17:29:46.800001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:04.062 [2024-10-30 17:29:46.800040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.062 [2024-10-30 17:29:46.800058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.062 [2024-10-30 17:29:46.800133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.062 [2024-10-30 17:29:46.800152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:04.062 [2024-10-30 17:29:46.800167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.062 [2024-10-30 17:29:46.800182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.062 [2024-10-30 17:29:46.800238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.062 [2024-10-30 17:29:46.800257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:04.062 [2024-10-30 17:29:46.800276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.062 [2024-10-30 17:29:46.800316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.062 [2024-10-30 17:29:46.800424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.062 [2024-10-30 17:29:46.800445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:04.062 [2024-10-30 17:29:46.800484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.062 [2024-10-30 17:29:46.800500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.062 [2024-10-30 17:29:46.800536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.062 [2024-10-30 17:29:46.800574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:04.062 [2024-10-30 17:29:46.800591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.062 [2024-10-30 17:29:46.800609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.062 [2024-10-30 17:29:46.800664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.062 [2024-10-30 17:29:46.800682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:04.062 [2024-10-30 17:29:46.800697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.062 [2024-10-30 17:29:46.800711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.062 
[2024-10-30 17:29:46.800755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:04.062 [2024-10-30 17:29:46.800773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:04.062 [2024-10-30 17:29:46.800822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:04.062 [2024-10-30 17:29:46.800838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.062 [2024-10-30 17:29:46.800939] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8028.383 ms, result 0 00:26:05.977 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:05.977 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:05.977 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:05.977 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:05.977 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80109 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80109 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80109 ']' 00:26:05.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:05.978 17:29:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:05.978 [2024-10-30 17:29:48.688334] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
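After the 'FTL shutdown' management process finishes (8028.383 ms above), the harness relaunches the target from the JSON config saved earlier and waits for its RPC socket before issuing further FTL calls. A rough, hypothetical equivalent of that restart-and-wait step, assembled only from the binary, config, and socket paths visible in this log (the polling loop is an assumption, not the actual autotest helper):

# Paths as they appear in this log; adjust for a different repo layout.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
TGT_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Relaunch the target pinned to core 0 with the pre-shutdown configuration.
$SPDK_BIN --cpumask='[0]' --config="$TGT_JSON" &
spdk_tgt_pid=$!

# Poll the default RPC socket until the target answers; the FTL startup traced below
# (NV cache scrub, metadata restore) runs during this wait.
until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "spdk_tgt pid $spdk_tgt_pid is ready"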
00:26:05.978 [2024-10-30 17:29:48.688451] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80109 ] 00:26:05.978 [2024-10-30 17:29:48.844400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.978 [2024-10-30 17:29:48.922894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.551 [2024-10-30 17:29:49.494945] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:06.551 [2024-10-30 17:29:49.494995] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:06.813 [2024-10-30 17:29:49.641674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.641708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:06.813 [2024-10-30 17:29:49.641719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:06.813 [2024-10-30 17:29:49.641725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.641765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.641773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:06.813 [2024-10-30 17:29:49.641779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:06.813 [2024-10-30 17:29:49.641785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.641815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:06.813 [2024-10-30 17:29:49.642367] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:06.813 [2024-10-30 17:29:49.642428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.642435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:06.813 [2024-10-30 17:29:49.642442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.631 ms 00:26:06.813 [2024-10-30 17:29:49.642448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.643401] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:06.813 [2024-10-30 17:29:49.652967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.652995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:06.813 [2024-10-30 17:29:49.653003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.567 ms 00:26:06.813 [2024-10-30 17:29:49.653012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.653056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.653063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:06.813 [2024-10-30 17:29:49.653070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:06.813 [2024-10-30 17:29:49.653075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.657371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 
17:29:49.657398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:06.813 [2024-10-30 17:29:49.657405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.254 ms 00:26:06.813 [2024-10-30 17:29:49.657411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.657452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.657458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:06.813 [2024-10-30 17:29:49.657465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:06.813 [2024-10-30 17:29:49.657470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.657511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.657518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:06.813 [2024-10-30 17:29:49.657524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:06.813 [2024-10-30 17:29:49.657532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.657547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:06.813 [2024-10-30 17:29:49.660069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.660093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:06.813 [2024-10-30 17:29:49.660101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.525 ms 00:26:06.813 [2024-10-30 17:29:49.660109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.813 [2024-10-30 17:29:49.660133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.813 [2024-10-30 17:29:49.660139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:06.813 [2024-10-30 17:29:49.660145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:06.813 [2024-10-30 17:29:49.660151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.814 [2024-10-30 17:29:49.660166] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:06.814 [2024-10-30 17:29:49.660180] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:06.814 [2024-10-30 17:29:49.660216] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:06.814 [2024-10-30 17:29:49.660228] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:06.814 [2024-10-30 17:29:49.660314] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:06.814 [2024-10-30 17:29:49.660322] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:06.814 [2024-10-30 17:29:49.660330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:06.814 [2024-10-30 17:29:49.660337] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660344] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660350] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:06.814 [2024-10-30 17:29:49.660357] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:06.814 [2024-10-30 17:29:49.660363] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:06.814 [2024-10-30 17:29:49.660368] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:06.814 [2024-10-30 17:29:49.660374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.814 [2024-10-30 17:29:49.660379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:06.814 [2024-10-30 17:29:49.660385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:26:06.814 [2024-10-30 17:29:49.660390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.814 [2024-10-30 17:29:49.660455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.814 [2024-10-30 17:29:49.660461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:06.814 [2024-10-30 17:29:49.660467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:06.814 [2024-10-30 17:29:49.660475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.814 [2024-10-30 17:29:49.660549] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:06.814 [2024-10-30 17:29:49.660556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:06.814 [2024-10-30 17:29:49.660563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:06.814 [2024-10-30 17:29:49.660579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:06.814 [2024-10-30 17:29:49.660589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:06.814 [2024-10-30 17:29:49.660595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:06.814 [2024-10-30 17:29:49.660600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:06.814 [2024-10-30 17:29:49.660610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:06.814 [2024-10-30 17:29:49.660614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:06.814 [2024-10-30 17:29:49.660627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:06.814 [2024-10-30 17:29:49.660632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:06.814 [2024-10-30 17:29:49.660642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:06.814 [2024-10-30 17:29:49.660647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660652] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:06.814 [2024-10-30 17:29:49.660657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:06.814 [2024-10-30 17:29:49.660662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:06.814 [2024-10-30 17:29:49.660672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:06.814 [2024-10-30 17:29:49.660677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:06.814 [2024-10-30 17:29:49.660691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:06.814 [2024-10-30 17:29:49.660696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:06.814 [2024-10-30 17:29:49.660706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:06.814 [2024-10-30 17:29:49.660711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:06.814 [2024-10-30 17:29:49.660720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:06.814 [2024-10-30 17:29:49.660725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:06.814 [2024-10-30 17:29:49.660735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:06.814 [2024-10-30 17:29:49.660749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:06.814 [2024-10-30 17:29:49.660763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:06.814 [2024-10-30 17:29:49.660768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660773] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:06.814 [2024-10-30 17:29:49.660778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:06.814 [2024-10-30 17:29:49.660784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:06.814 [2024-10-30 17:29:49.660796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:06.814 [2024-10-30 17:29:49.660801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:06.814 [2024-10-30 17:29:49.660806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:06.814 [2024-10-30 17:29:49.660812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:06.814 [2024-10-30 17:29:49.660816] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:06.814 [2024-10-30 17:29:49.660821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:06.814 [2024-10-30 17:29:49.660828] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:06.814 [2024-10-30 17:29:49.660836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:06.814 [2024-10-30 17:29:49.660842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:06.814 [2024-10-30 17:29:49.660848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:06.814 [2024-10-30 17:29:49.660853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:06.814 [2024-10-30 17:29:49.660858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:06.814 [2024-10-30 17:29:49.660864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:06.815 [2024-10-30 17:29:49.660869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:06.815 [2024-10-30 17:29:49.660874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:06.815 [2024-10-30 17:29:49.660879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:06.815 [2024-10-30 17:29:49.660885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:06.815 [2024-10-30 17:29:49.660890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:06.815 [2024-10-30 17:29:49.660895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:06.815 [2024-10-30 17:29:49.660900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:06.815 [2024-10-30 17:29:49.660906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:06.815 [2024-10-30 17:29:49.660911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:06.815 [2024-10-30 17:29:49.660916] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:06.815 [2024-10-30 17:29:49.660922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:06.815 [2024-10-30 17:29:49.660928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:06.815 [2024-10-30 17:29:49.660933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:06.815 [2024-10-30 17:29:49.660939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:06.815 [2024-10-30 17:29:49.660944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:06.815 [2024-10-30 17:29:49.660949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.815 [2024-10-30 17:29:49.660961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:06.815 [2024-10-30 17:29:49.660967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.453 ms 00:26:06.815 [2024-10-30 17:29:49.660973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.815 [2024-10-30 17:29:49.661004] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:06.815 [2024-10-30 17:29:49.661011] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:11.024 [2024-10-30 17:29:53.685110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.685190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:11.024 [2024-10-30 17:29:53.685224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4024.089 ms 00:26:11.024 [2024-10-30 17:29:53.685234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.716302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.716359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:11.024 [2024-10-30 17:29:53.716374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.819 ms 00:26:11.024 [2024-10-30 17:29:53.716383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.716476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.716487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:11.024 [2024-10-30 17:29:53.716502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:11.024 [2024-10-30 17:29:53.716511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.751866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.752086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:11.024 [2024-10-30 17:29:53.752107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.317 ms 00:26:11.024 [2024-10-30 17:29:53.752116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.752159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.752168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:11.024 [2024-10-30 17:29:53.752178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:11.024 [2024-10-30 17:29:53.752185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.752755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.752789] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:11.024 [2024-10-30 17:29:53.752800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.476 ms 00:26:11.024 [2024-10-30 17:29:53.752808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.752866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.752877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:11.024 [2024-10-30 17:29:53.752885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:11.024 [2024-10-30 17:29:53.752893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.770254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.770428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:11.024 [2024-10-30 17:29:53.770447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.338 ms 00:26:11.024 [2024-10-30 17:29:53.770455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.784813] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:11.024 [2024-10-30 17:29:53.784863] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:11.024 [2024-10-30 17:29:53.784877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.784885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:11.024 [2024-10-30 17:29:53.784895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.302 ms 00:26:11.024 [2024-10-30 17:29:53.784902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.799664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.799713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:11.024 [2024-10-30 17:29:53.799726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.709 ms 00:26:11.024 [2024-10-30 17:29:53.799734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.812211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.812399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:11.024 [2024-10-30 17:29:53.812419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.420 ms 00:26:11.024 [2024-10-30 17:29:53.812427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.825023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.825069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:11.024 [2024-10-30 17:29:53.825081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.511 ms 00:26:11.024 [2024-10-30 17:29:53.825088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.825753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.825781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:11.024 [2024-10-30 
17:29:53.825795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:26:11.024 [2024-10-30 17:29:53.825817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.902707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.902773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:11.024 [2024-10-30 17:29:53.902789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.867 ms 00:26:11.024 [2024-10-30 17:29:53.902799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.913930] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:11.024 [2024-10-30 17:29:53.914915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.024 [2024-10-30 17:29:53.914962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:11.024 [2024-10-30 17:29:53.914973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.057 ms 00:26:11.024 [2024-10-30 17:29:53.914981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.024 [2024-10-30 17:29:53.915066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.025 [2024-10-30 17:29:53.915077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:11.025 [2024-10-30 17:29:53.915089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:11.025 [2024-10-30 17:29:53.915097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.025 [2024-10-30 17:29:53.915159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.025 [2024-10-30 17:29:53.915170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:11.025 [2024-10-30 17:29:53.915179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:11.025 [2024-10-30 17:29:53.915187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.025 [2024-10-30 17:29:53.915237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.025 [2024-10-30 17:29:53.915247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:11.025 [2024-10-30 17:29:53.915256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:11.025 [2024-10-30 17:29:53.915268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.025 [2024-10-30 17:29:53.915304] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:11.025 [2024-10-30 17:29:53.915314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.025 [2024-10-30 17:29:53.915323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:11.025 [2024-10-30 17:29:53.915331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:11.025 [2024-10-30 17:29:53.915339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.025 [2024-10-30 17:29:53.940480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.025 [2024-10-30 17:29:53.940672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:11.025 [2024-10-30 17:29:53.940701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.120 ms 00:26:11.025 [2024-10-30 17:29:53.940711] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.025 [2024-10-30 17:29:53.940795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.025 [2024-10-30 17:29:53.940806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:11.025 [2024-10-30 17:29:53.940816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:11.025 [2024-10-30 17:29:53.940825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.025 [2024-10-30 17:29:53.942119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4299.904 ms, result 0 00:26:11.025 [2024-10-30 17:29:53.957040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.025 [2024-10-30 17:29:53.973068] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:11.025 [2024-10-30 17:29:53.981235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:11.971 17:29:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:11.971 17:29:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:26:11.971 17:29:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:11.971 17:29:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:11.971 17:29:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:11.971 [2024-10-30 17:29:54.837783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.971 [2024-10-30 17:29:54.837829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:11.971 [2024-10-30 17:29:54.837839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:11.971 [2024-10-30 17:29:54.837846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.971 [2024-10-30 17:29:54.837867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.971 [2024-10-30 17:29:54.837873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:11.971 [2024-10-30 17:29:54.837880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:11.971 [2024-10-30 17:29:54.837885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.971 [2024-10-30 17:29:54.837900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:11.971 [2024-10-30 17:29:54.837906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:11.971 [2024-10-30 17:29:54.837913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:11.971 [2024-10-30 17:29:54.837918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:11.971 [2024-10-30 17:29:54.837961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.170 ms, result 0 00:26:11.971 true 00:26:11.971 17:29:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:12.232 { 00:26:12.232 "name": "ftl", 00:26:12.232 "properties": [ 00:26:12.232 { 00:26:12.232 "name": "superblock_version", 00:26:12.232 "value": 5, 00:26:12.232 "read-only": true 00:26:12.232 }, 
00:26:12.232 { 00:26:12.232 "name": "base_device", 00:26:12.232 "bands": [ 00:26:12.232 { 00:26:12.232 "id": 0, 00:26:12.232 "state": "CLOSED", 00:26:12.232 "validity": 1.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 1, 00:26:12.232 "state": "CLOSED", 00:26:12.232 "validity": 1.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 2, 00:26:12.232 "state": "CLOSED", 00:26:12.232 "validity": 0.007843137254901933 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 3, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 4, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 5, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 6, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 7, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 8, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 9, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 10, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 11, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 12, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 13, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 14, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 15, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 16, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 17, 00:26:12.232 "state": "FREE", 00:26:12.232 "validity": 0.0 00:26:12.232 } 00:26:12.232 ], 00:26:12.232 "read-only": true 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "name": "cache_device", 00:26:12.232 "type": "bdev", 00:26:12.232 "chunks": [ 00:26:12.232 { 00:26:12.232 "id": 0, 00:26:12.232 "state": "INACTIVE", 00:26:12.232 "utilization": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 1, 00:26:12.232 "state": "OPEN", 00:26:12.232 "utilization": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 2, 00:26:12.232 "state": "OPEN", 00:26:12.232 "utilization": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 3, 00:26:12.232 "state": "FREE", 00:26:12.232 "utilization": 0.0 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "id": 4, 00:26:12.232 "state": "FREE", 00:26:12.232 "utilization": 0.0 00:26:12.232 } 00:26:12.232 ], 00:26:12.232 "read-only": true 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "name": "verbose_mode", 00:26:12.232 "value": true, 00:26:12.232 "unit": "", 00:26:12.232 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:12.232 }, 00:26:12.232 { 00:26:12.232 "name": "prep_upgrade_on_shutdown", 00:26:12.232 "value": false, 00:26:12.232 "unit": "", 00:26:12.232 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:12.232 } 00:26:12.232 ] 00:26:12.232 } 00:26:12.232 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:12.232 17:29:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:12.232 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:12.494 Validate MD5 checksum, iteration 1 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:12.494 17:29:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:12.756 [2024-10-30 17:29:55.520638] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:26:12.756 [2024-10-30 17:29:55.520749] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80204 ] 00:26:12.756 [2024-10-30 17:29:55.679441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.017 [2024-10-30 17:29:55.771803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.432  [2024-10-30T17:29:57.985Z] Copying: 577/1024 [MB] (577 MBps) [2024-10-30T17:29:59.420Z] Copying: 1024/1024 [MB] (average 609 MBps) 00:26:16.439 00:26:16.439 17:29:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:16.439 17:29:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4a65fa3bd8cbe3c75b996957c0867227 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4a65fa3bd8cbe3c75b996957c0867227 != \4\a\6\5\f\a\3\b\d\8\c\b\e\3\c\7\5\b\9\9\6\9\5\7\c\0\8\6\7\2\2\7 ]] 00:26:18.365 Validate MD5 checksum, iteration 2 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:18.365 17:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:18.365 [2024-10-30 17:30:01.060490] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:26:18.365 [2024-10-30 17:30:01.060607] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80266 ] 00:26:18.365 [2024-10-30 17:30:01.223821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.625 [2024-10-30 17:30:01.349831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.013  [2024-10-30T17:30:03.937Z] Copying: 492/1024 [MB] (492 MBps) [2024-10-30T17:30:03.937Z] Copying: 1007/1024 [MB] (515 MBps) [2024-10-30T17:30:04.877Z] Copying: 1024/1024 [MB] (average 503 MBps) 00:26:21.896 00:26:21.896 17:30:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:21.896 17:30:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=415c40194967f17094f1b68e90ef43f7 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 415c40194967f17094f1b68e90ef43f7 != \4\1\5\c\4\0\1\9\4\9\6\7\f\1\7\0\9\4\f\1\b\6\8\e\9\0\e\f\4\3\f\7 ]] 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80109 ]] 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80109 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80333 00:26:24.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80333 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80333 ']' 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:24.424 17:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:24.424 [2024-10-30 17:30:07.038332] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:26:24.424 [2024-10-30 17:30:07.038446] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80333 ] 00:26:24.424 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 80109 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:24.424 [2024-10-30 17:30:07.193728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.424 [2024-10-30 17:30:07.284413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.990 [2024-10-30 17:30:07.908459] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:24.991 [2024-10-30 17:30:07.908510] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:25.252 [2024-10-30 17:30:08.056678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.056720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:25.252 [2024-10-30 17:30:08.056732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:25.252 [2024-10-30 17:30:08.056740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.056790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.056800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:25.252 [2024-10-30 17:30:08.056809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:25.252 [2024-10-30 17:30:08.056816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.056838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:25.252 [2024-10-30 17:30:08.057530] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:25.252 [2024-10-30 17:30:08.057546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.057554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:25.252 [2024-10-30 17:30:08.057562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.716 ms 00:26:25.252 [2024-10-30 17:30:08.057569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.057843] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:25.252 [2024-10-30 17:30:08.074471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.074507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:25.252 [2024-10-30 17:30:08.074519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.627 ms 00:26:25.252 [2024-10-30 17:30:08.074528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.083358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:25.252 [2024-10-30 17:30:08.083386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:25.252 [2024-10-30 17:30:08.083398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:25.252 [2024-10-30 17:30:08.083406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.083715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.083726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:25.252 [2024-10-30 17:30:08.083735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.233 ms 00:26:25.252 [2024-10-30 17:30:08.083742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.083788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.083799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:25.252 [2024-10-30 17:30:08.083807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:25.252 [2024-10-30 17:30:08.083814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.083838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.083847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:25.252 [2024-10-30 17:30:08.083855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:25.252 [2024-10-30 17:30:08.083861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.083881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:25.252 [2024-10-30 17:30:08.086821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.086845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:25.252 [2024-10-30 17:30:08.086854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.943 ms 00:26:25.252 [2024-10-30 17:30:08.086861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.086889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.086899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:25.252 [2024-10-30 17:30:08.086907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:25.252 [2024-10-30 17:30:08.086914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.086934] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:25.252 [2024-10-30 17:30:08.086951] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:25.252 [2024-10-30 17:30:08.086986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:25.252 [2024-10-30 17:30:08.087001] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:25.252 [2024-10-30 17:30:08.087105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:25.252 [2024-10-30 17:30:08.087115] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:25.252 [2024-10-30 17:30:08.087125] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:25.252 [2024-10-30 17:30:08.087135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:25.252 [2024-10-30 17:30:08.087145] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:25.252 [2024-10-30 17:30:08.087153] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:25.252 [2024-10-30 17:30:08.087160] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:25.252 [2024-10-30 17:30:08.087167] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:25.252 [2024-10-30 17:30:08.087174] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:25.252 [2024-10-30 17:30:08.087181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.087188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:25.252 [2024-10-30 17:30:08.087209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:26:25.252 [2024-10-30 17:30:08.087217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.087302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.252 [2024-10-30 17:30:08.087309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:25.252 [2024-10-30 17:30:08.087316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:26:25.252 [2024-10-30 17:30:08.087323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.252 [2024-10-30 17:30:08.087436] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:25.252 [2024-10-30 17:30:08.087446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:25.252 [2024-10-30 17:30:08.087454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:25.252 [2024-10-30 17:30:08.087465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:25.252 [2024-10-30 17:30:08.087479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:25.252 [2024-10-30 17:30:08.087493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:25.252 [2024-10-30 17:30:08.087500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:25.252 [2024-10-30 17:30:08.087507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:25.252 [2024-10-30 17:30:08.087521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:25.252 [2024-10-30 17:30:08.087528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:25.252 [2024-10-30 17:30:08.087542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:26:25.252 [2024-10-30 17:30:08.087549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:25.252 [2024-10-30 17:30:08.087564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:25.252 [2024-10-30 17:30:08.087570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:25.252 [2024-10-30 17:30:08.087583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:25.252 [2024-10-30 17:30:08.087590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.252 [2024-10-30 17:30:08.087596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:25.252 [2024-10-30 17:30:08.087608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:25.252 [2024-10-30 17:30:08.087614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.252 [2024-10-30 17:30:08.087621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:25.252 [2024-10-30 17:30:08.087628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:25.252 [2024-10-30 17:30:08.087634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.252 [2024-10-30 17:30:08.087640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:25.252 [2024-10-30 17:30:08.087647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:25.252 [2024-10-30 17:30:08.087653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.252 [2024-10-30 17:30:08.087660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:25.252 [2024-10-30 17:30:08.087666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:25.252 [2024-10-30 17:30:08.087673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:25.252 [2024-10-30 17:30:08.087686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:25.252 [2024-10-30 17:30:08.087692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:25.252 [2024-10-30 17:30:08.087705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.252 [2024-10-30 17:30:08.087718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:25.252 [2024-10-30 17:30:08.087725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:25.253 [2024-10-30 17:30:08.087731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.253 [2024-10-30 17:30:08.087738] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:25.253 [2024-10-30 17:30:08.087746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:25.253 [2024-10-30 17:30:08.087752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:25.253 [2024-10-30 17:30:08.087760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:26:25.253 [2024-10-30 17:30:08.087767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:25.253 [2024-10-30 17:30:08.087774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:25.253 [2024-10-30 17:30:08.087781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:25.253 [2024-10-30 17:30:08.087787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:25.253 [2024-10-30 17:30:08.087794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:25.253 [2024-10-30 17:30:08.087800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:25.253 [2024-10-30 17:30:08.087808] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:25.253 [2024-10-30 17:30:08.087817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:25.253 [2024-10-30 17:30:08.087833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:25.253 [2024-10-30 17:30:08.087854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:25.253 [2024-10-30 17:30:08.087861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:25.253 [2024-10-30 17:30:08.087868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:25.253 [2024-10-30 17:30:08.087875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:25.253 [2024-10-30 17:30:08.087924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:26:25.253 [2024-10-30 17:30:08.087932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:25.253 [2024-10-30 17:30:08.087947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:25.253 [2024-10-30 17:30:08.087954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:25.253 [2024-10-30 17:30:08.087962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:25.253 [2024-10-30 17:30:08.087969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.087978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:25.253 [2024-10-30 17:30:08.087985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms 00:26:25.253 [2024-10-30 17:30:08.087992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.111948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.111978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:25.253 [2024-10-30 17:30:08.111988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.908 ms 00:26:25.253 [2024-10-30 17:30:08.111996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.112032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.112040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:25.253 [2024-10-30 17:30:08.112049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:25.253 [2024-10-30 17:30:08.112056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.142479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.142506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:25.253 [2024-10-30 17:30:08.142516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.374 ms 00:26:25.253 [2024-10-30 17:30:08.142524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.142549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.142557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:25.253 [2024-10-30 17:30:08.142565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:25.253 [2024-10-30 17:30:08.142572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.142662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.142671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:25.253 [2024-10-30 17:30:08.142679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:26:25.253 [2024-10-30 17:30:08.142686] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.142726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.142733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:25.253 [2024-10-30 17:30:08.142741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:25.253 [2024-10-30 17:30:08.142748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.156911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.156939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:25.253 [2024-10-30 17:30:08.156948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.143 ms 00:26:25.253 [2024-10-30 17:30:08.156956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.157061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.157072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:25.253 [2024-10-30 17:30:08.157080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:25.253 [2024-10-30 17:30:08.157087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.185313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.185348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:25.253 [2024-10-30 17:30:08.185360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.207 ms 00:26:25.253 [2024-10-30 17:30:08.185368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.253 [2024-10-30 17:30:08.194589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.253 [2024-10-30 17:30:08.194614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:25.253 [2024-10-30 17:30:08.194623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:26:25.253 [2024-10-30 17:30:08.194637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.514 [2024-10-30 17:30:08.250887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.514 [2024-10-30 17:30:08.251035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:25.514 [2024-10-30 17:30:08.251058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.197 ms 00:26:25.514 [2024-10-30 17:30:08.251067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.514 [2024-10-30 17:30:08.251184] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:25.514 [2024-10-30 17:30:08.251299] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:25.514 [2024-10-30 17:30:08.251388] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:25.514 [2024-10-30 17:30:08.251476] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:25.514 [2024-10-30 17:30:08.251485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.514 [2024-10-30 17:30:08.251493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:25.514 [2024-10-30 
17:30:08.251502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.383 ms 00:26:25.514 [2024-10-30 17:30:08.251509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.514 [2024-10-30 17:30:08.251561] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:25.514 [2024-10-30 17:30:08.251572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.514 [2024-10-30 17:30:08.251580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:25.514 [2024-10-30 17:30:08.251591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:25.514 [2024-10-30 17:30:08.251599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.514 [2024-10-30 17:30:08.266862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.514 [2024-10-30 17:30:08.266896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:25.514 [2024-10-30 17:30:08.266910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.242 ms 00:26:25.514 [2024-10-30 17:30:08.266918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.514 [2024-10-30 17:30:08.275639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.514 [2024-10-30 17:30:08.275668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:25.514 [2024-10-30 17:30:08.275678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:25.514 [2024-10-30 17:30:08.275686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.514 [2024-10-30 17:30:08.275772] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:25.514 [2024-10-30 17:30:08.275909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.514 [2024-10-30 17:30:08.275922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:25.514 [2024-10-30 17:30:08.275930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.140 ms 00:26:25.514 [2024-10-30 17:30:08.275938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.457 [2024-10-30 17:30:09.175233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.457 [2024-10-30 17:30:09.175322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:26.457 [2024-10-30 17:30:09.175341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 898.450 ms 00:26:26.457 [2024-10-30 17:30:09.175351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.457 [2024-10-30 17:30:09.180336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.457 [2024-10-30 17:30:09.180528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:26.457 [2024-10-30 17:30:09.180550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.722 ms 00:26:26.457 [2024-10-30 17:30:09.180559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.457 [2024-10-30 17:30:09.181923] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:26.457 [2024-10-30 17:30:09.181989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.457 [2024-10-30 17:30:09.181999] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:26.457 [2024-10-30 17:30:09.182010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.274 ms 00:26:26.457 [2024-10-30 17:30:09.182019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.457 [2024-10-30 17:30:09.182063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.457 [2024-10-30 17:30:09.182074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:26.457 [2024-10-30 17:30:09.182084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:26.457 [2024-10-30 17:30:09.182093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.457 [2024-10-30 17:30:09.182135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 906.358 ms, result 0 00:26:26.457 [2024-10-30 17:30:09.182179] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:26.457 [2024-10-30 17:30:09.182330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.457 [2024-10-30 17:30:09.182346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:26.457 [2024-10-30 17:30:09.182356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.151 ms 00:26:26.457 [2024-10-30 17:30:09.182364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 17:30:10.082840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.402 [2024-10-30 17:30:10.082898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:27.402 [2024-10-30 17:30:10.082908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 899.266 ms 00:26:27.402 [2024-10-30 17:30:10.082915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 17:30:10.086447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.402 [2024-10-30 17:30:10.086476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:27.402 [2024-10-30 17:30:10.086484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.024 ms 00:26:27.402 [2024-10-30 17:30:10.086490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 17:30:10.086884] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:27.402 [2024-10-30 17:30:10.086905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.402 [2024-10-30 17:30:10.086912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:27.402 [2024-10-30 17:30:10.086919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:26:27.402 [2024-10-30 17:30:10.086925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 17:30:10.086947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.402 [2024-10-30 17:30:10.086954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:27.402 [2024-10-30 17:30:10.086960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:27.402 [2024-10-30 17:30:10.086965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 
17:30:10.087002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 904.823 ms, result 0 00:26:27.402 [2024-10-30 17:30:10.087034] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:27.402 [2024-10-30 17:30:10.087042] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:27.402 [2024-10-30 17:30:10.087049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.402 [2024-10-30 17:30:10.087055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:27.402 [2024-10-30 17:30:10.087061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1811.295 ms 00:26:27.402 [2024-10-30 17:30:10.087066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 17:30:10.087089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.402 [2024-10-30 17:30:10.087096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:27.402 [2024-10-30 17:30:10.087104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:27.402 [2024-10-30 17:30:10.087110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 17:30:10.095883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:27.402 [2024-10-30 17:30:10.096082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.402 [2024-10-30 17:30:10.096095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:27.402 [2024-10-30 17:30:10.096102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.959 ms 00:26:27.402 [2024-10-30 17:30:10.096108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 17:30:10.096670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.402 [2024-10-30 17:30:10.096686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:27.402 [2024-10-30 17:30:10.096693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:26:27.402 [2024-10-30 17:30:10.096701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.402 [2024-10-30 17:30:10.098446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.403 [2024-10-30 17:30:10.098464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:27.403 [2024-10-30 17:30:10.098471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.731 ms 00:26:27.403 [2024-10-30 17:30:10.098477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.403 [2024-10-30 17:30:10.098506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.403 [2024-10-30 17:30:10.098513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:27.403 [2024-10-30 17:30:10.098520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:27.403 [2024-10-30 17:30:10.098525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.403 [2024-10-30 17:30:10.098607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.403 [2024-10-30 17:30:10.098614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:27.403 
[2024-10-30 17:30:10.098620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:27.403 [2024-10-30 17:30:10.098626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.403 [2024-10-30 17:30:10.098643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.403 [2024-10-30 17:30:10.098650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:27.403 [2024-10-30 17:30:10.098656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:27.403 [2024-10-30 17:30:10.098661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.403 [2024-10-30 17:30:10.098682] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:27.403 [2024-10-30 17:30:10.098691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.403 [2024-10-30 17:30:10.098697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:27.403 [2024-10-30 17:30:10.098703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:27.403 [2024-10-30 17:30:10.098708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.403 [2024-10-30 17:30:10.098747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.403 [2024-10-30 17:30:10.098755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:27.403 [2024-10-30 17:30:10.098761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:27.403 [2024-10-30 17:30:10.098767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.403 [2024-10-30 17:30:10.099617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2042.484 ms, result 0 00:26:27.403 [2024-10-30 17:30:10.112355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.403 [2024-10-30 17:30:10.128347] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:27.403 [2024-10-30 17:30:10.136450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:27.403 Validate MD5 checksum, iteration 1 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:27.403 17:30:10 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:27.403 17:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:27.403 [2024-10-30 17:30:10.238900] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:26:27.403 [2024-10-30 17:30:10.239268] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80377 ] 00:26:27.664 [2024-10-30 17:30:10.399728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.664 [2024-10-30 17:30:10.497958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.050  [2024-10-30T17:30:12.975Z] Copying: 617/1024 [MB] (617 MBps) [2024-10-30T17:30:13.918Z] Copying: 1024/1024 [MB] (average 611 MBps) 00:26:30.937 00:26:30.937 17:30:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:30.937 17:30:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:32.845 Validate MD5 checksum, iteration 2 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4a65fa3bd8cbe3c75b996957c0867227 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4a65fa3bd8cbe3c75b996957c0867227 != \4\a\6\5\f\a\3\b\d\8\c\b\e\3\c\7\5\b\9\9\6\9\5\7\c\0\8\6\7\2\2\7 ]] 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:32.845 17:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:33.103 [2024-10-30 17:30:15.845386] Starting SPDK v25.01-pre git sha1 
12fc2abf1 / DPDK 24.03.0 initialization... 00:26:33.103 [2024-10-30 17:30:15.845618] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80445 ] 00:26:33.103 [2024-10-30 17:30:15.995346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.103 [2024-10-30 17:30:16.069940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.014  [2024-10-30T17:30:18.256Z] Copying: 699/1024 [MB] (699 MBps) [2024-10-30T17:30:22.456Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:26:39.475 00:26:39.475 17:30:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:39.475 17:30:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=415c40194967f17094f1b68e90ef43f7 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 415c40194967f17094f1b68e90ef43f7 != \4\1\5\c\4\0\1\9\4\9\6\7\f\1\7\0\9\4\f\1\b\6\8\e\9\0\e\f\4\3\f\7 ]] 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:41.389 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80333 ]] 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80333 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80333 ']' 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80333 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80333 00:26:41.650 killing process with pid 80333 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80333' 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 80333 00:26:41.650 17:30:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80333 00:26:42.224 [2024-10-30 17:30:24.978526] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:42.224 [2024-10-30 17:30:24.989509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.224 [2024-10-30 17:30:24.989543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:42.224 [2024-10-30 17:30:24.989553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:42.224 [2024-10-30 17:30:24.989559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.224 [2024-10-30 17:30:24.989575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:42.224 [2024-10-30 17:30:24.991674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.224 [2024-10-30 17:30:24.991699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:42.224 [2024-10-30 17:30:24.991707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.088 ms 00:26:42.224 [2024-10-30 17:30:24.991717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.224 [2024-10-30 17:30:24.991894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.224 [2024-10-30 17:30:24.991902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:42.224 [2024-10-30 17:30:24.991909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:26:42.224 [2024-10-30 17:30:24.991914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.224 [2024-10-30 17:30:24.993056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.224 [2024-10-30 17:30:24.993183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:42.224 [2024-10-30 17:30:24.993196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.130 ms 00:26:42.224 [2024-10-30 17:30:24.993216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.224 [2024-10-30 17:30:24.994083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.224 [2024-10-30 17:30:24.994097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:42.224 [2024-10-30 17:30:24.994104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:26:42.224 [2024-10-30 17:30:24.994109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.224 [2024-10-30 17:30:25.001702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.224 [2024-10-30 17:30:25.001730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:42.225 [2024-10-30 17:30:25.001738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.568 ms 00:26:42.225 [2024-10-30 17:30:25.001744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.005850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.005876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:42.225 [2024-10-30 17:30:25.005884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.076 ms 00:26:42.225 [2024-10-30 17:30:25.005891] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.005948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.005956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:42.225 [2024-10-30 17:30:25.005963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:42.225 [2024-10-30 17:30:25.005968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.013277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.013302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:42.225 [2024-10-30 17:30:25.013309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.297 ms 00:26:42.225 [2024-10-30 17:30:25.013314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.020567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.020593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:42.225 [2024-10-30 17:30:25.020600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.229 ms 00:26:42.225 [2024-10-30 17:30:25.020605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.027606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.027711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:42.225 [2024-10-30 17:30:25.027722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.977 ms 00:26:42.225 [2024-10-30 17:30:25.027727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.034821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.034917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:42.225 [2024-10-30 17:30:25.034928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.049 ms 00:26:42.225 [2024-10-30 17:30:25.034934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.034956] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:42.225 [2024-10-30 17:30:25.034970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:42.225 [2024-10-30 17:30:25.034977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:42.225 [2024-10-30 17:30:25.034983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:42.225 [2024-10-30 17:30:25.034989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.034995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 
[2024-10-30 17:30:25.035018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:42.225 [2024-10-30 17:30:25.035076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:42.225 [2024-10-30 17:30:25.035081] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 79468eb4-47cf-4986-9686-5f4d51d7ff2f 00:26:42.225 [2024-10-30 17:30:25.035087] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:42.225 [2024-10-30 17:30:25.035093] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:42.225 [2024-10-30 17:30:25.035098] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:42.225 [2024-10-30 17:30:25.035103] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:42.225 [2024-10-30 17:30:25.035108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:42.225 [2024-10-30 17:30:25.035114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:42.225 [2024-10-30 17:30:25.035119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:42.225 [2024-10-30 17:30:25.035123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:42.225 [2024-10-30 17:30:25.035128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:42.225 [2024-10-30 17:30:25.035134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.035140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:42.225 [2024-10-30 17:30:25.035148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:26:42.225 [2024-10-30 17:30:25.035154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.044660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.044684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:42.225 [2024-10-30 17:30:25.044691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.485 ms 00:26:42.225 [2024-10-30 17:30:25.044697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:26:42.225 [2024-10-30 17:30:25.044964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.225 [2024-10-30 17:30:25.044979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:42.225 [2024-10-30 17:30:25.044985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:26:42.225 [2024-10-30 17:30:25.044991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.077833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.077936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:42.225 [2024-10-30 17:30:25.077948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.077954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.077977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.077987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:42.225 [2024-10-30 17:30:25.077993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.077999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.078058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.078066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:42.225 [2024-10-30 17:30:25.078072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.078078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.078091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.078097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:42.225 [2024-10-30 17:30:25.078105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.078110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.136541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.136660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:42.225 [2024-10-30 17:30:25.136672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.136678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.184725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.184760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:42.225 [2024-10-30 17:30:25.184769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.184775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.184823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.184831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:42.225 [2024-10-30 17:30:25.184837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.184843] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.184885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.184891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:42.225 [2024-10-30 17:30:25.184898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.184911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.184980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.184987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:42.225 [2024-10-30 17:30:25.184993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.184998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.185024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.185031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:42.225 [2024-10-30 17:30:25.185037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.185043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.225 [2024-10-30 17:30:25.185072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.225 [2024-10-30 17:30:25.185078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:42.225 [2024-10-30 17:30:25.185085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.225 [2024-10-30 17:30:25.185091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.226 [2024-10-30 17:30:25.185124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:42.226 [2024-10-30 17:30:25.185130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:42.226 [2024-10-30 17:30:25.185137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:42.226 [2024-10-30 17:30:25.185145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.226 [2024-10-30 17:30:25.185250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 195.705 ms, result 0 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.172 Remove shared memory files 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:43.172 17:30:25 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80109 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:43.172 00:26:43.172 real 1m22.853s 00:26:43.172 user 1m54.506s 00:26:43.172 sys 0m19.215s 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:43.172 ************************************ 00:26:43.172 END TEST ftl_upgrade_shutdown 00:26:43.172 ************************************ 00:26:43.172 17:30:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:43.172 17:30:25 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:43.172 17:30:25 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:43.172 17:30:25 ftl -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:26:43.172 17:30:25 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:43.172 17:30:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:43.172 ************************************ 00:26:43.172 START TEST ftl_restore_fast 00:26:43.172 ************************************ 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:43.172 * Looking for test storage... 00:26:43.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lcov --version 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:43.172 17:30:25 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:26:43.172 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:26:43.172 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:43.172 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:26:43.172 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:26:43.172 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:26:43.172 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:26:43.172 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:43.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.173 --rc genhtml_branch_coverage=1 00:26:43.173 --rc genhtml_function_coverage=1 00:26:43.173 --rc genhtml_legend=1 00:26:43.173 --rc geninfo_all_blocks=1 00:26:43.173 --rc geninfo_unexecuted_blocks=1 00:26:43.173 00:26:43.173 ' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:43.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.173 --rc genhtml_branch_coverage=1 00:26:43.173 --rc genhtml_function_coverage=1 00:26:43.173 --rc genhtml_legend=1 00:26:43.173 --rc geninfo_all_blocks=1 00:26:43.173 --rc geninfo_unexecuted_blocks=1 00:26:43.173 00:26:43.173 ' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:43.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.173 --rc genhtml_branch_coverage=1 00:26:43.173 --rc genhtml_function_coverage=1 00:26:43.173 --rc genhtml_legend=1 00:26:43.173 --rc geninfo_all_blocks=1 00:26:43.173 --rc geninfo_unexecuted_blocks=1 00:26:43.173 00:26:43.173 ' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:43.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.173 --rc genhtml_branch_coverage=1 00:26:43.173 --rc genhtml_function_coverage=1 00:26:43.173 --rc genhtml_legend=1 00:26:43.173 --rc geninfo_all_blocks=1 00:26:43.173 --rc geninfo_unexecuted_blocks=1 00:26:43.173 00:26:43.173 ' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.FRlkPEen93 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:43.173 17:30:26 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=80635 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 80635 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # '[' -z 80635 ']' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:43.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:43.173 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:43.173 [2024-10-30 17:30:26.110799] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:26:43.173 [2024-10-30 17:30:26.111121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80635 ] 00:26:43.434 [2024-10-30 17:30:26.267650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.434 [2024-10-30 17:30:26.344691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.006 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:44.006 17:30:26 ftl.ftl_restore_fast -- common/autotest_common.sh@866 -- # return 0 00:26:44.006 17:30:26 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:44.006 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:44.006 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:44.006 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:44.006 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:44.006 17:30:26 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:44.267 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:44.267 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:44.267 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:44.267 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:26:44.267 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:44.267 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:26:44.267 17:30:27 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1383 -- # local nb 00:26:44.267 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:44.528 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:44.528 { 00:26:44.528 "name": "nvme0n1", 00:26:44.528 "aliases": [ 00:26:44.528 "c58406c5-0cd4-4b72-a278-b5e59c3e9819" 00:26:44.528 ], 00:26:44.528 "product_name": "NVMe disk", 00:26:44.528 "block_size": 4096, 00:26:44.528 "num_blocks": 1310720, 00:26:44.528 "uuid": "c58406c5-0cd4-4b72-a278-b5e59c3e9819", 00:26:44.528 "numa_id": -1, 00:26:44.528 "assigned_rate_limits": { 00:26:44.528 "rw_ios_per_sec": 0, 00:26:44.528 "rw_mbytes_per_sec": 0, 00:26:44.528 "r_mbytes_per_sec": 0, 00:26:44.528 "w_mbytes_per_sec": 0 00:26:44.528 }, 00:26:44.528 "claimed": true, 00:26:44.528 "claim_type": "read_many_write_one", 00:26:44.528 "zoned": false, 00:26:44.528 "supported_io_types": { 00:26:44.528 "read": true, 00:26:44.528 "write": true, 00:26:44.528 "unmap": true, 00:26:44.528 "flush": true, 00:26:44.528 "reset": true, 00:26:44.528 "nvme_admin": true, 00:26:44.528 "nvme_io": true, 00:26:44.528 "nvme_io_md": false, 00:26:44.528 "write_zeroes": true, 00:26:44.528 "zcopy": false, 00:26:44.528 "get_zone_info": false, 00:26:44.528 "zone_management": false, 00:26:44.528 "zone_append": false, 00:26:44.528 "compare": true, 00:26:44.528 "compare_and_write": false, 00:26:44.528 "abort": true, 00:26:44.528 "seek_hole": false, 00:26:44.528 "seek_data": false, 00:26:44.528 "copy": true, 00:26:44.528 "nvme_iov_md": false 00:26:44.528 }, 00:26:44.528 "driver_specific": { 00:26:44.528 "nvme": [ 00:26:44.528 { 00:26:44.528 "pci_address": "0000:00:11.0", 00:26:44.528 "trid": { 00:26:44.528 "trtype": "PCIe", 00:26:44.528 "traddr": "0000:00:11.0" 00:26:44.528 }, 00:26:44.528 "ctrlr_data": { 00:26:44.528 "cntlid": 0, 00:26:44.528 "vendor_id": "0x1b36", 00:26:44.528 "model_number": "QEMU NVMe Ctrl", 00:26:44.528 "serial_number": "12341", 00:26:44.528 "firmware_revision": "8.0.0", 00:26:44.528 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:44.528 "oacs": { 00:26:44.528 "security": 0, 00:26:44.528 "format": 1, 00:26:44.528 "firmware": 0, 00:26:44.528 "ns_manage": 1 00:26:44.528 }, 00:26:44.528 "multi_ctrlr": false, 00:26:44.528 "ana_reporting": false 00:26:44.528 }, 00:26:44.528 "vs": { 00:26:44.528 "nvme_version": "1.4" 00:26:44.528 }, 00:26:44.529 "ns_data": { 00:26:44.529 "id": 1, 00:26:44.529 "can_share": false 00:26:44.529 } 00:26:44.529 } 00:26:44.529 ], 00:26:44.529 "mp_policy": "active_passive" 00:26:44.529 } 00:26:44.529 } 00:26:44.529 ]' 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=1310720 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 5120 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:26:44.529 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:44.791 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=3a43a14d-db21-4a05-aa8c-9ac544b8c5fa 00:26:44.791 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:44.791 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a43a14d-db21-4a05-aa8c-9ac544b8c5fa 00:26:45.053 17:30:27 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=d795f756-83f3-46fd-8815-0cbacccc1f9a 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d795f756-83f3-46fd-8815-0cbacccc1f9a 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:26:45.314 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:45.575 { 00:26:45.575 "name": "7c009a2c-e76c-44ea-b6b2-6636c725243c", 00:26:45.575 "aliases": [ 00:26:45.575 "lvs/nvme0n1p0" 00:26:45.575 ], 00:26:45.575 "product_name": "Logical Volume", 00:26:45.575 "block_size": 4096, 00:26:45.575 "num_blocks": 26476544, 00:26:45.575 "uuid": "7c009a2c-e76c-44ea-b6b2-6636c725243c", 00:26:45.575 "assigned_rate_limits": { 00:26:45.575 "rw_ios_per_sec": 0, 00:26:45.575 "rw_mbytes_per_sec": 0, 00:26:45.575 "r_mbytes_per_sec": 0, 00:26:45.575 "w_mbytes_per_sec": 0 00:26:45.575 }, 00:26:45.575 "claimed": false, 00:26:45.575 "zoned": false, 00:26:45.575 "supported_io_types": { 00:26:45.575 "read": true, 00:26:45.575 "write": true, 00:26:45.575 "unmap": true, 00:26:45.575 "flush": false, 00:26:45.575 "reset": true, 00:26:45.575 "nvme_admin": false, 00:26:45.575 "nvme_io": false, 00:26:45.575 "nvme_io_md": false, 00:26:45.575 "write_zeroes": true, 00:26:45.575 "zcopy": false, 00:26:45.575 "get_zone_info": false, 00:26:45.575 "zone_management": false, 00:26:45.575 "zone_append": 
false, 00:26:45.575 "compare": false, 00:26:45.575 "compare_and_write": false, 00:26:45.575 "abort": false, 00:26:45.575 "seek_hole": true, 00:26:45.575 "seek_data": true, 00:26:45.575 "copy": false, 00:26:45.575 "nvme_iov_md": false 00:26:45.575 }, 00:26:45.575 "driver_specific": { 00:26:45.575 "lvol": { 00:26:45.575 "lvol_store_uuid": "d795f756-83f3-46fd-8815-0cbacccc1f9a", 00:26:45.575 "base_bdev": "nvme0n1", 00:26:45.575 "thin_provision": true, 00:26:45.575 "num_allocated_clusters": 0, 00:26:45.575 "snapshot": false, 00:26:45.575 "clone": false, 00:26:45.575 "esnap_clone": false 00:26:45.575 } 00:26:45.575 } 00:26:45.575 } 00:26:45.575 ]' 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:45.575 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:45.836 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:45.836 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:45.836 17:30:28 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:45.836 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:45.836 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:45.836 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:26:45.836 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:26:45.836 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:46.097 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:46.097 { 00:26:46.097 "name": "7c009a2c-e76c-44ea-b6b2-6636c725243c", 00:26:46.097 "aliases": [ 00:26:46.097 "lvs/nvme0n1p0" 00:26:46.097 ], 00:26:46.097 "product_name": "Logical Volume", 00:26:46.097 "block_size": 4096, 00:26:46.097 "num_blocks": 26476544, 00:26:46.097 "uuid": "7c009a2c-e76c-44ea-b6b2-6636c725243c", 00:26:46.097 "assigned_rate_limits": { 00:26:46.097 "rw_ios_per_sec": 0, 00:26:46.097 "rw_mbytes_per_sec": 0, 00:26:46.097 "r_mbytes_per_sec": 0, 00:26:46.097 "w_mbytes_per_sec": 0 00:26:46.097 }, 00:26:46.097 "claimed": false, 00:26:46.097 "zoned": false, 00:26:46.097 "supported_io_types": { 00:26:46.097 "read": true, 00:26:46.097 "write": true, 00:26:46.097 "unmap": true, 00:26:46.097 "flush": false, 00:26:46.097 "reset": true, 00:26:46.097 "nvme_admin": false, 00:26:46.097 "nvme_io": false, 00:26:46.097 "nvme_io_md": false, 00:26:46.097 "write_zeroes": true, 00:26:46.097 "zcopy": false, 00:26:46.097 "get_zone_info": false, 00:26:46.097 "zone_management": false, 
00:26:46.097 "zone_append": false, 00:26:46.097 "compare": false, 00:26:46.097 "compare_and_write": false, 00:26:46.097 "abort": false, 00:26:46.097 "seek_hole": true, 00:26:46.097 "seek_data": true, 00:26:46.097 "copy": false, 00:26:46.097 "nvme_iov_md": false 00:26:46.097 }, 00:26:46.097 "driver_specific": { 00:26:46.097 "lvol": { 00:26:46.097 "lvol_store_uuid": "d795f756-83f3-46fd-8815-0cbacccc1f9a", 00:26:46.097 "base_bdev": "nvme0n1", 00:26:46.097 "thin_provision": true, 00:26:46.097 "num_allocated_clusters": 0, 00:26:46.097 "snapshot": false, 00:26:46.097 "clone": false, 00:26:46.097 "esnap_clone": false 00:26:46.097 } 00:26:46.097 } 00:26:46.097 } 00:26:46.097 ]' 00:26:46.097 17:30:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:46.097 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:26:46.097 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:46.097 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:26:46.097 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:46.097 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:26:46.097 17:30:29 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:46.097 17:30:29 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:46.359 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:46.359 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:46.359 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:46.359 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:46.359 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:26:46.359 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:26:46.359 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c009a2c-e76c-44ea-b6b2-6636c725243c 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:46.621 { 00:26:46.621 "name": "7c009a2c-e76c-44ea-b6b2-6636c725243c", 00:26:46.621 "aliases": [ 00:26:46.621 "lvs/nvme0n1p0" 00:26:46.621 ], 00:26:46.621 "product_name": "Logical Volume", 00:26:46.621 "block_size": 4096, 00:26:46.621 "num_blocks": 26476544, 00:26:46.621 "uuid": "7c009a2c-e76c-44ea-b6b2-6636c725243c", 00:26:46.621 "assigned_rate_limits": { 00:26:46.621 "rw_ios_per_sec": 0, 00:26:46.621 "rw_mbytes_per_sec": 0, 00:26:46.621 "r_mbytes_per_sec": 0, 00:26:46.621 "w_mbytes_per_sec": 0 00:26:46.621 }, 00:26:46.621 "claimed": false, 00:26:46.621 "zoned": false, 00:26:46.621 "supported_io_types": { 00:26:46.621 "read": true, 00:26:46.621 "write": true, 00:26:46.621 "unmap": true, 00:26:46.621 "flush": false, 00:26:46.621 "reset": true, 00:26:46.621 "nvme_admin": false, 00:26:46.621 "nvme_io": false, 00:26:46.621 "nvme_io_md": false, 00:26:46.621 "write_zeroes": true, 00:26:46.621 "zcopy": false, 00:26:46.621 "get_zone_info": false, 00:26:46.621 "zone_management": false, 00:26:46.621 "zone_append": false, 00:26:46.621 "compare": false, 00:26:46.621 "compare_and_write": false, 00:26:46.621 "abort": false, 00:26:46.621 "seek_hole": 
true, 00:26:46.621 "seek_data": true, 00:26:46.621 "copy": false, 00:26:46.621 "nvme_iov_md": false 00:26:46.621 }, 00:26:46.621 "driver_specific": { 00:26:46.621 "lvol": { 00:26:46.621 "lvol_store_uuid": "d795f756-83f3-46fd-8815-0cbacccc1f9a", 00:26:46.621 "base_bdev": "nvme0n1", 00:26:46.621 "thin_provision": true, 00:26:46.621 "num_allocated_clusters": 0, 00:26:46.621 "snapshot": false, 00:26:46.621 "clone": false, 00:26:46.621 "esnap_clone": false 00:26:46.621 } 00:26:46.621 } 00:26:46.621 } 00:26:46.621 ]' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7c009a2c-e76c-44ea-b6b2-6636c725243c --l2p_dram_limit 10' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:46.621 17:30:29 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7c009a2c-e76c-44ea-b6b2-6636c725243c --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:46.882 [2024-10-30 17:30:29.697359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.697395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:46.882 [2024-10-30 17:30:29.697407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:46.882 [2024-10-30 17:30:29.697414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.697460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.697468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:46.882 [2024-10-30 17:30:29.697475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:46.882 [2024-10-30 17:30:29.697481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.697500] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:46.882 [2024-10-30 17:30:29.698075] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:46.882 [2024-10-30 17:30:29.698091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.698097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:46.882 [2024-10-30 17:30:29.698105] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:26:46.882 [2024-10-30 17:30:29.698112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.698140] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c85ed329-2507-423c-8fa1-d5f290e67da9 00:26:46.882 [2024-10-30 17:30:29.699088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.699111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:46.882 [2024-10-30 17:30:29.699118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:46.882 [2024-10-30 17:30:29.699126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.703932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.703960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:46.882 [2024-10-30 17:30:29.703967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.766 ms 00:26:46.882 [2024-10-30 17:30:29.703977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.704082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.704092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:46.882 [2024-10-30 17:30:29.704099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:46.882 [2024-10-30 17:30:29.704108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.704138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.704146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:46.882 [2024-10-30 17:30:29.704153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:46.882 [2024-10-30 17:30:29.704160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.704179] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:46.882 [2024-10-30 17:30:29.707083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.707105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:46.882 [2024-10-30 17:30:29.707115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:26:46.882 [2024-10-30 17:30:29.707123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.707150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.707157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:46.882 [2024-10-30 17:30:29.707164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:46.882 [2024-10-30 17:30:29.707170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.882 [2024-10-30 17:30:29.707190] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:46.882 [2024-10-30 17:30:29.707303] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:46.882 [2024-10-30 17:30:29.707316] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:46.882 [2024-10-30 17:30:29.707325] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:46.882 [2024-10-30 17:30:29.707334] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:46.882 [2024-10-30 17:30:29.707341] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:46.882 [2024-10-30 17:30:29.707349] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:46.882 [2024-10-30 17:30:29.707354] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:46.882 [2024-10-30 17:30:29.707361] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:46.882 [2024-10-30 17:30:29.707366] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:46.882 [2024-10-30 17:30:29.707375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.882 [2024-10-30 17:30:29.707380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:46.883 [2024-10-30 17:30:29.707388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:26:46.883 [2024-10-30 17:30:29.707399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.883 [2024-10-30 17:30:29.707464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.883 [2024-10-30 17:30:29.707470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:46.883 [2024-10-30 17:30:29.707477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:46.883 [2024-10-30 17:30:29.707483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.883 [2024-10-30 17:30:29.707556] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:46.883 [2024-10-30 17:30:29.707565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:46.883 [2024-10-30 17:30:29.707572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:46.883 [2024-10-30 17:30:29.707578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:46.883 [2024-10-30 17:30:29.707590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:46.883 [2024-10-30 17:30:29.707601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:46.883 [2024-10-30 17:30:29.707608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:46.883 [2024-10-30 17:30:29.707620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:46.883 [2024-10-30 17:30:29.707625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:46.883 [2024-10-30 17:30:29.707632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:46.883 [2024-10-30 17:30:29.707637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:46.883 [2024-10-30 17:30:29.707645] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:26:46.883 [2024-10-30 17:30:29.707650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:46.883 [2024-10-30 17:30:29.707663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:46.883 [2024-10-30 17:30:29.707670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:46.883 [2024-10-30 17:30:29.707681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.883 [2024-10-30 17:30:29.707693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:46.883 [2024-10-30 17:30:29.707698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.883 [2024-10-30 17:30:29.707709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:46.883 [2024-10-30 17:30:29.707715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.883 [2024-10-30 17:30:29.707726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:46.883 [2024-10-30 17:30:29.707731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.883 [2024-10-30 17:30:29.707742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:46.883 [2024-10-30 17:30:29.707750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:46.883 [2024-10-30 17:30:29.707761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:46.883 [2024-10-30 17:30:29.707766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:46.883 [2024-10-30 17:30:29.707772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:46.883 [2024-10-30 17:30:29.707777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:46.883 [2024-10-30 17:30:29.707783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:46.883 [2024-10-30 17:30:29.707788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:46.883 [2024-10-30 17:30:29.707800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:46.883 [2024-10-30 17:30:29.707806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707810] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:46.883 [2024-10-30 17:30:29.707818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:46.883 [2024-10-30 17:30:29.707823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:46.883 [2024-10-30 
17:30:29.707833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.883 [2024-10-30 17:30:29.707839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:46.883 [2024-10-30 17:30:29.707847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:46.883 [2024-10-30 17:30:29.707852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:46.883 [2024-10-30 17:30:29.707858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:46.883 [2024-10-30 17:30:29.707863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:46.883 [2024-10-30 17:30:29.707870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:46.883 [2024-10-30 17:30:29.707877] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:46.883 [2024-10-30 17:30:29.707886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:46.883 [2024-10-30 17:30:29.707892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:46.883 [2024-10-30 17:30:29.707899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:46.883 [2024-10-30 17:30:29.707904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:46.883 [2024-10-30 17:30:29.707911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:46.883 [2024-10-30 17:30:29.707916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:46.883 [2024-10-30 17:30:29.707923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:46.883 [2024-10-30 17:30:29.707928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:46.883 [2024-10-30 17:30:29.707934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:46.883 [2024-10-30 17:30:29.707940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:46.883 [2024-10-30 17:30:29.707948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:46.883 [2024-10-30 17:30:29.707953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:46.883 [2024-10-30 17:30:29.707959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:46.883 [2024-10-30 17:30:29.707965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:46.883 [2024-10-30 17:30:29.707971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:46.883 [2024-10-30 
17:30:29.707977] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:46.883 [2024-10-30 17:30:29.707985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:46.883 [2024-10-30 17:30:29.707992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:46.883 [2024-10-30 17:30:29.707999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:46.883 [2024-10-30 17:30:29.708005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:46.883 [2024-10-30 17:30:29.708012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:46.883 [2024-10-30 17:30:29.708017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.883 [2024-10-30 17:30:29.708024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:46.883 [2024-10-30 17:30:29.708031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:26:46.883 [2024-10-30 17:30:29.708038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.883 [2024-10-30 17:30:29.708079] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:46.883 [2024-10-30 17:30:29.708089] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:51.122 [2024-10-30 17:30:33.982235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:33.982319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:51.122 [2024-10-30 17:30:33.982338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4274.138 ms 00:26:51.122 [2024-10-30 17:30:33.982350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.122 [2024-10-30 17:30:34.015085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:34.015149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:51.122 [2024-10-30 17:30:34.015164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.481 ms 00:26:51.122 [2024-10-30 17:30:34.015178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.122 [2024-10-30 17:30:34.015350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:34.015366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:51.122 [2024-10-30 17:30:34.015376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:51.122 [2024-10-30 17:30:34.015389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.122 [2024-10-30 17:30:34.051274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:34.051326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:51.122 [2024-10-30 17:30:34.051338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.833 ms 00:26:51.122 [2024-10-30 17:30:34.051350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:51.122 [2024-10-30 17:30:34.051387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:34.051399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:51.122 [2024-10-30 17:30:34.051408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:51.122 [2024-10-30 17:30:34.051421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.122 [2024-10-30 17:30:34.052030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:34.052071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:51.122 [2024-10-30 17:30:34.052083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:26:51.122 [2024-10-30 17:30:34.052093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.122 [2024-10-30 17:30:34.052232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:34.052244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:51.122 [2024-10-30 17:30:34.052253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:26:51.122 [2024-10-30 17:30:34.052266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.122 [2024-10-30 17:30:34.069831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:34.069876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:51.122 [2024-10-30 17:30:34.069888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.541 ms 00:26:51.122 [2024-10-30 17:30:34.069901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.122 [2024-10-30 17:30:34.083150] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:51.122 [2024-10-30 17:30:34.087035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.122 [2024-10-30 17:30:34.087074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:51.122 [2024-10-30 17:30:34.087088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.040 ms 00:26:51.122 [2024-10-30 17:30:34.087096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.383 [2024-10-30 17:30:34.205887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.383 [2024-10-30 17:30:34.205951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:51.383 [2024-10-30 17:30:34.205971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.749 ms 00:26:51.383 [2024-10-30 17:30:34.205981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.383 [2024-10-30 17:30:34.206196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.383 [2024-10-30 17:30:34.206235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:51.383 [2024-10-30 17:30:34.206250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:26:51.383 [2024-10-30 17:30:34.206262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.383 [2024-10-30 17:30:34.233263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.383 [2024-10-30 17:30:34.233313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
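The trace_step records above and below belong to the "FTL startup" management pipeline for ftl0. For orientation, a startup sequence like this is normally kicked off by a single bdev_ftl_create RPC over a base bdev and an NV-cache bdev; the sketch below is an assumption-level reconstruction (the base bdev name and exact option spelling are not shown in this log and may differ between SPDK versions), not the command taken from this job's scripts:

  # Hypothetical reconstruction of the RPC that starts the "FTL startup" pipeline traced here.
  # "basen1" is a placeholder; "nvc0n1p0" is the write-buffer cache named later in this log.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py bdev_ftl_create -b ftl0 -d basen1 -c nvc0n1p0
  # The L2P DRAM budget (10 MiB in this run, per the "l2p maximum resident size is: 9 (of 10) MiB"
  # notice above) can also be capped at create time; the exact option name varies by SPDK version.
  # On success the RPC prints the new bdev's name and UUID, matching the JSON fragment further down.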
00:26:51.383 [2024-10-30 17:30:34.233330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.942 ms 00:26:51.383 [2024-10-30 17:30:34.233340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.383 [2024-10-30 17:30:34.259448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.383 [2024-10-30 17:30:34.259494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:51.383 [2024-10-30 17:30:34.259510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.050 ms 00:26:51.383 [2024-10-30 17:30:34.259518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.383 [2024-10-30 17:30:34.260141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.383 [2024-10-30 17:30:34.260160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:51.383 [2024-10-30 17:30:34.260172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:26:51.383 [2024-10-30 17:30:34.260179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.383 [2024-10-30 17:30:34.354070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.383 [2024-10-30 17:30:34.354123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:51.383 [2024-10-30 17:30:34.354143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.775 ms 00:26:51.383 [2024-10-30 17:30:34.354153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.645 [2024-10-30 17:30:34.382810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.645 [2024-10-30 17:30:34.382859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:51.645 [2024-10-30 17:30:34.382878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.549 ms 00:26:51.645 [2024-10-30 17:30:34.382887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.645 [2024-10-30 17:30:34.410121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.645 [2024-10-30 17:30:34.410166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:51.645 [2024-10-30 17:30:34.410181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.174 ms 00:26:51.645 [2024-10-30 17:30:34.410188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.645 [2024-10-30 17:30:34.437104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.645 [2024-10-30 17:30:34.437150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:51.645 [2024-10-30 17:30:34.437165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.848 ms 00:26:51.645 [2024-10-30 17:30:34.437173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.645 [2024-10-30 17:30:34.437247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.645 [2024-10-30 17:30:34.437257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:51.645 [2024-10-30 17:30:34.437273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:51.645 [2024-10-30 17:30:34.437281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.645 [2024-10-30 17:30:34.437407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.645 [2024-10-30 17:30:34.437417] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:51.645 [2024-10-30 17:30:34.437428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:51.645 [2024-10-30 17:30:34.437436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.645 [2024-10-30 17:30:34.438679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4740.753 ms, result 0 00:26:51.645 { 00:26:51.645 "name": "ftl0", 00:26:51.645 "uuid": "c85ed329-2507-423c-8fa1-d5f290e67da9" 00:26:51.645 } 00:26:51.645 17:30:34 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:51.645 17:30:34 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:51.907 17:30:34 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:51.907 17:30:34 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:51.907 [2024-10-30 17:30:34.877957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.907 [2024-10-30 17:30:34.878013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:51.907 [2024-10-30 17:30:34.878028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:51.907 [2024-10-30 17:30:34.878048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.907 [2024-10-30 17:30:34.878073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:51.907 [2024-10-30 17:30:34.881138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.907 [2024-10-30 17:30:34.881178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:51.907 [2024-10-30 17:30:34.881208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:26:51.907 [2024-10-30 17:30:34.881218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.907 [2024-10-30 17:30:34.881496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.907 [2024-10-30 17:30:34.881507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:51.907 [2024-10-30 17:30:34.881519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:26:51.907 [2024-10-30 17:30:34.881531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.907 [2024-10-30 17:30:34.885070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.907 [2024-10-30 17:30:34.885090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:51.907 [2024-10-30 17:30:34.885103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:26:51.907 [2024-10-30 17:30:34.885112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:34.891374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.168 [2024-10-30 17:30:34.891412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:52.168 [2024-10-30 17:30:34.891426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:26:52.168 [2024-10-30 17:30:34.891434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:34.918948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.168 
[2024-10-30 17:30:34.918991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:52.168 [2024-10-30 17:30:34.919006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.422 ms 00:26:52.168 [2024-10-30 17:30:34.919014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:34.938348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.168 [2024-10-30 17:30:34.938396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:52.168 [2024-10-30 17:30:34.938413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.268 ms 00:26:52.168 [2024-10-30 17:30:34.938422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:34.938617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.168 [2024-10-30 17:30:34.938630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:52.168 [2024-10-30 17:30:34.938642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:26:52.168 [2024-10-30 17:30:34.938649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:34.965259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.168 [2024-10-30 17:30:34.965302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:52.168 [2024-10-30 17:30:34.965317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.585 ms 00:26:52.168 [2024-10-30 17:30:34.965325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:34.991279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.168 [2024-10-30 17:30:34.991322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:52.168 [2024-10-30 17:30:34.991336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.894 ms 00:26:52.168 [2024-10-30 17:30:34.991343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:35.016897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.168 [2024-10-30 17:30:35.016938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:52.168 [2024-10-30 17:30:35.016952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.493 ms 00:26:52.168 [2024-10-30 17:30:35.016958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:35.042168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.168 [2024-10-30 17:30:35.042208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:52.168 [2024-10-30 17:30:35.042222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.093 ms 00:26:52.168 [2024-10-30 17:30:35.042228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.168 [2024-10-30 17:30:35.042267] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:52.168 [2024-10-30 17:30:35.042281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:52.168 [2024-10-30 17:30:35.042500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042517] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 
17:30:35.042726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:26:52.169 [2024-10-30 17:30:35.042949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.042997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:52.169 [2024-10-30 17:30:35.043147] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:52.169 [2024-10-30 17:30:35.043157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c85ed329-2507-423c-8fa1-d5f290e67da9 00:26:52.169 
[2024-10-30 17:30:35.043165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:52.169 [2024-10-30 17:30:35.043178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:52.169 [2024-10-30 17:30:35.043184] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:52.169 [2024-10-30 17:30:35.043194] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:52.169 [2024-10-30 17:30:35.043215] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:52.169 [2024-10-30 17:30:35.043224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:52.169 [2024-10-30 17:30:35.043231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:52.169 [2024-10-30 17:30:35.043238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:52.169 [2024-10-30 17:30:35.043245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:52.169 [2024-10-30 17:30:35.043254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.169 [2024-10-30 17:30:35.043261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:52.169 [2024-10-30 17:30:35.043271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:26:52.169 [2024-10-30 17:30:35.043278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.169 [2024-10-30 17:30:35.056251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.169 [2024-10-30 17:30:35.056277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:52.169 [2024-10-30 17:30:35.056289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.917 ms 00:26:52.169 [2024-10-30 17:30:35.056296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.169 [2024-10-30 17:30:35.056657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.169 [2024-10-30 17:30:35.056673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:52.169 [2024-10-30 17:30:35.056683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:26:52.169 [2024-10-30 17:30:35.056690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.169 [2024-10-30 17:30:35.098942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.169 [2024-10-30 17:30:35.098972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:52.170 [2024-10-30 17:30:35.098984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.170 [2024-10-30 17:30:35.098991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.170 [2024-10-30 17:30:35.099046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.170 [2024-10-30 17:30:35.099054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:52.170 [2024-10-30 17:30:35.099063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.170 [2024-10-30 17:30:35.099070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.170 [2024-10-30 17:30:35.099137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.170 [2024-10-30 17:30:35.099147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:52.170 [2024-10-30 17:30:35.099156] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.170 [2024-10-30 17:30:35.099163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.170 [2024-10-30 17:30:35.099183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.170 [2024-10-30 17:30:35.099191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:52.170 [2024-10-30 17:30:35.099211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.170 [2024-10-30 17:30:35.099218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.174633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.429 [2024-10-30 17:30:35.174664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:52.429 [2024-10-30 17:30:35.174676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.429 [2024-10-30 17:30:35.174684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.236888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.429 [2024-10-30 17:30:35.236922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:52.429 [2024-10-30 17:30:35.236934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.429 [2024-10-30 17:30:35.236941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.237018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.429 [2024-10-30 17:30:35.237029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:52.429 [2024-10-30 17:30:35.237039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.429 [2024-10-30 17:30:35.237046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.237092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.429 [2024-10-30 17:30:35.237101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:52.429 [2024-10-30 17:30:35.237110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.429 [2024-10-30 17:30:35.237117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.237224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.429 [2024-10-30 17:30:35.237235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:52.429 [2024-10-30 17:30:35.237246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.429 [2024-10-30 17:30:35.237254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.237286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.429 [2024-10-30 17:30:35.237294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:52.429 [2024-10-30 17:30:35.237303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.429 [2024-10-30 17:30:35.237310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.237345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.429 [2024-10-30 17:30:35.237353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:26:52.429 [2024-10-30 17:30:35.237364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.429 [2024-10-30 17:30:35.237371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.237415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:52.429 [2024-10-30 17:30:35.237424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:52.429 [2024-10-30 17:30:35.237433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:52.429 [2024-10-30 17:30:35.237440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.429 [2024-10-30 17:30:35.237563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 359.579 ms, result 0 00:26:52.429 true 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 80635 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # '[' -z 80635 ']' 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # kill -0 80635 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@957 -- # uname 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80635 00:26:52.429 killing process with pid 80635 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80635' 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@971 -- # kill 80635 00:26:52.429 17:30:35 ftl.ftl_restore_fast -- common/autotest_common.sh@976 -- # wait 80635 00:26:57.721 17:30:40 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:27:01.934 262144+0 records in 00:27:01.934 262144+0 records out 00:27:01.934 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.25006 s, 253 MB/s 00:27:01.934 17:30:44 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:03.845 17:30:46 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:04.107 [2024-10-30 17:30:46.838809] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
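Putting the restore.sh steps visible in this log together: the bdev subsystem configuration is saved as JSON, ftl0 is unloaded and the app that owned it (pid 80635 above) is killed, a 1 GiB random test file is generated and checksummed, and spdk_dd replays that file onto a re-created ftl0. A hedged shell reconstruction of the sequence follows (variable names and the redirect into ftl.json are inferred from the log, not copied from test/ftl/restore.sh):

  spdk=/home/vagrant/spdk_repo/spdk
  testfile=$spdk/test/ftl/testfile
  ftl_json=$spdk/test/ftl/config/ftl.json

  # restore.sh@61-63: wrap the bdev subsystem dump so spdk_dd can consume it later
  # (writing it to $ftl_json is an inference from the --json= path used below).
  {
    echo '{"subsystems": ['
    $spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > "$ftl_json"

  # restore.sh@65-66: detach the FTL bdev and stop the app that owned it (pid 80635 here).
  $spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

  # restore.sh@69-70: 1 GiB of random data (4 KiB x 256 Ki blocks) plus its checksum,
  # used later to verify what is read back from ftl0.
  dd if=/dev/urandom of="$testfile" bs=4K count=256K
  md5sum "$testfile"

  # restore.sh@73: re-create the bdevs from the saved config inside spdk_dd and write the
  # test file onto ftl0; this is what produces the second "FTL startup" trace below.
  $spdk/build/bin/spdk_dd --if="$testfile" --ob=ftl0 --json="$ftl_json"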
00:27:04.107 [2024-10-30 17:30:46.839046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80862 ] 00:27:04.107 [2024-10-30 17:30:46.990117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.368 [2024-10-30 17:30:47.150210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.631 [2024-10-30 17:30:47.414056] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:04.631 [2024-10-30 17:30:47.414138] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:04.631 [2024-10-30 17:30:47.573253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.631 [2024-10-30 17:30:47.573313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:04.631 [2024-10-30 17:30:47.573332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:04.631 [2024-10-30 17:30:47.573341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.631 [2024-10-30 17:30:47.573397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.631 [2024-10-30 17:30:47.573408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:04.631 [2024-10-30 17:30:47.573419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:04.631 [2024-10-30 17:30:47.573428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.631 [2024-10-30 17:30:47.573449] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:04.631 [2024-10-30 17:30:47.574221] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:04.631 [2024-10-30 17:30:47.574249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.631 [2024-10-30 17:30:47.574260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:04.631 [2024-10-30 17:30:47.574269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:27:04.631 [2024-10-30 17:30:47.574278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.631 [2024-10-30 17:30:47.576003] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:04.631 [2024-10-30 17:30:47.590543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.631 [2024-10-30 17:30:47.590591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:04.631 [2024-10-30 17:30:47.590606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.543 ms 00:27:04.631 [2024-10-30 17:30:47.590622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.631 [2024-10-30 17:30:47.590694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.631 [2024-10-30 17:30:47.590706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:04.631 [2024-10-30 17:30:47.590716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:04.632 [2024-10-30 17:30:47.590724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.632 [2024-10-30 17:30:47.598705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
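The configuration file being loaded by spdk_dd here is the one assembled from save_subsystem_config earlier. Its exact contents are not printed in this log; as a rough, assumption-level sketch it has the usual SPDK subsystem-config shape, with the FTL bdev re-created from its saved UUID:

  cat "$ftl_json"
  # Sketch only -- the method list and parameter names depend on the bdevs configured in this job:
  # {
  #   "subsystems": [
  #     { "subsystem": "bdev",
  #       "config": [
  #         ...entries creating the base and nvc0n1p0 cache bdevs...,
  #         { "method": "bdev_ftl_create",
  #           "params": { "name": "ftl0",
  #                       "uuid": "c85ed329-2507-423c-8fa1-d5f290e67da9" } }
  #       ] }
  #   ]
  # }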
00:27:04.632 [2024-10-30 17:30:47.598885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:04.632 [2024-10-30 17:30:47.598904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.904 ms 00:27:04.632 [2024-10-30 17:30:47.598912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.632 [2024-10-30 17:30:47.599000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.632 [2024-10-30 17:30:47.599009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:04.632 [2024-10-30 17:30:47.599018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:04.632 [2024-10-30 17:30:47.599026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.632 [2024-10-30 17:30:47.599069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.632 [2024-10-30 17:30:47.599079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:04.632 [2024-10-30 17:30:47.599088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:04.632 [2024-10-30 17:30:47.599095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.632 [2024-10-30 17:30:47.599119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:04.632 [2024-10-30 17:30:47.603015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.632 [2024-10-30 17:30:47.603051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:04.632 [2024-10-30 17:30:47.603062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.901 ms 00:27:04.632 [2024-10-30 17:30:47.603074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.632 [2024-10-30 17:30:47.603109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.632 [2024-10-30 17:30:47.603118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:04.632 [2024-10-30 17:30:47.603126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:04.632 [2024-10-30 17:30:47.603134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.632 [2024-10-30 17:30:47.603184] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:04.632 [2024-10-30 17:30:47.603224] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:04.632 [2024-10-30 17:30:47.603263] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:04.632 [2024-10-30 17:30:47.603282] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:04.632 [2024-10-30 17:30:47.603388] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:04.632 [2024-10-30 17:30:47.603400] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:04.632 [2024-10-30 17:30:47.603412] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:04.632 [2024-10-30 17:30:47.603423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603433] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603442] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:04.632 [2024-10-30 17:30:47.603450] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:04.632 [2024-10-30 17:30:47.603459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:04.632 [2024-10-30 17:30:47.603467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:04.632 [2024-10-30 17:30:47.603478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.632 [2024-10-30 17:30:47.603486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:04.632 [2024-10-30 17:30:47.603495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:27:04.632 [2024-10-30 17:30:47.603502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.632 [2024-10-30 17:30:47.603586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.632 [2024-10-30 17:30:47.603595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:04.632 [2024-10-30 17:30:47.603603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:04.632 [2024-10-30 17:30:47.603610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.632 [2024-10-30 17:30:47.603716] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:04.632 [2024-10-30 17:30:47.603730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:04.632 [2024-10-30 17:30:47.603739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:04.632 [2024-10-30 17:30:47.603763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:04.632 [2024-10-30 17:30:47.603784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:04.632 [2024-10-30 17:30:47.603798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:04.632 [2024-10-30 17:30:47.603805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:04.632 [2024-10-30 17:30:47.603812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:04.632 [2024-10-30 17:30:47.603818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:04.632 [2024-10-30 17:30:47.603827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:04.632 [2024-10-30 17:30:47.603840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:04.632 [2024-10-30 17:30:47.603854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603860] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:04.632 [2024-10-30 17:30:47.603874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:04.632 [2024-10-30 17:30:47.603894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:04.632 [2024-10-30 17:30:47.603914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:04.632 [2024-10-30 17:30:47.603934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.632 [2024-10-30 17:30:47.603949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:04.632 [2024-10-30 17:30:47.603956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:04.632 [2024-10-30 17:30:47.603963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:04.632 [2024-10-30 17:30:47.603969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:04.632 [2024-10-30 17:30:47.603976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:04.632 [2024-10-30 17:30:47.603982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:04.632 [2024-10-30 17:30:47.603989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:04.632 [2024-10-30 17:30:47.603996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:04.632 [2024-10-30 17:30:47.604002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.632 [2024-10-30 17:30:47.604010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:04.632 [2024-10-30 17:30:47.604017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:04.632 [2024-10-30 17:30:47.604023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.632 [2024-10-30 17:30:47.604030] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:04.632 [2024-10-30 17:30:47.604038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:04.632 [2024-10-30 17:30:47.604046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:04.632 [2024-10-30 17:30:47.604054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.632 [2024-10-30 17:30:47.604063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:04.632 [2024-10-30 17:30:47.604071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:04.632 [2024-10-30 17:30:47.604079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:04.632 
[2024-10-30 17:30:47.604086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:04.632 [2024-10-30 17:30:47.604094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:04.632 [2024-10-30 17:30:47.604100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:04.632 [2024-10-30 17:30:47.604109] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:04.632 [2024-10-30 17:30:47.604124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.632 [2024-10-30 17:30:47.604133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:04.632 [2024-10-30 17:30:47.604141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:04.632 [2024-10-30 17:30:47.604148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:04.632 [2024-10-30 17:30:47.604156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:04.632 [2024-10-30 17:30:47.604164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:04.632 [2024-10-30 17:30:47.604171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:04.632 [2024-10-30 17:30:47.604178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:04.633 [2024-10-30 17:30:47.604185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:04.633 [2024-10-30 17:30:47.604193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:04.633 [2024-10-30 17:30:47.604214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:04.633 [2024-10-30 17:30:47.604221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:04.633 [2024-10-30 17:30:47.604228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:04.633 [2024-10-30 17:30:47.604235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:04.633 [2024-10-30 17:30:47.604242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:04.633 [2024-10-30 17:30:47.604250] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:04.633 [2024-10-30 17:30:47.604258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.633 [2024-10-30 17:30:47.604270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:04.633 [2024-10-30 17:30:47.604278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:04.633 [2024-10-30 17:30:47.604285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:04.633 [2024-10-30 17:30:47.604293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:04.633 [2024-10-30 17:30:47.604300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.633 [2024-10-30 17:30:47.604309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:04.633 [2024-10-30 17:30:47.604317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:27:04.633 [2024-10-30 17:30:47.604332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.635780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.635831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:04.895 [2024-10-30 17:30:47.635844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.404 ms 00:27:04.895 [2024-10-30 17:30:47.635853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.635938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.635953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:04.895 [2024-10-30 17:30:47.635962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:04.895 [2024-10-30 17:30:47.635970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.682273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.682324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:04.895 [2024-10-30 17:30:47.682337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.246 ms 00:27:04.895 [2024-10-30 17:30:47.682347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.682396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.682406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:04.895 [2024-10-30 17:30:47.682415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:04.895 [2024-10-30 17:30:47.682427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.682983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.683016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:04.895 [2024-10-30 17:30:47.683028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:27:04.895 [2024-10-30 17:30:47.683036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.683190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.683234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:04.895 [2024-10-30 17:30:47.683244] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:27:04.895 [2024-10-30 17:30:47.683252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.698701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.698742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:04.895 [2024-10-30 17:30:47.698754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.421 ms 00:27:04.895 [2024-10-30 17:30:47.698765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.713042] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:04.895 [2024-10-30 17:30:47.713093] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:04.895 [2024-10-30 17:30:47.713106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.713114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:04.895 [2024-10-30 17:30:47.713123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.232 ms 00:27:04.895 [2024-10-30 17:30:47.713131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.742456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.742515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:04.895 [2024-10-30 17:30:47.742536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.271 ms 00:27:04.895 [2024-10-30 17:30:47.742544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.755281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.755333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:04.895 [2024-10-30 17:30:47.755345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.684 ms 00:27:04.895 [2024-10-30 17:30:47.755353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.767604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.767647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:04.895 [2024-10-30 17:30:47.767659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.205 ms 00:27:04.895 [2024-10-30 17:30:47.767666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.768326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.768350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:04.895 [2024-10-30 17:30:47.768361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:27:04.895 [2024-10-30 17:30:47.768368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.895 [2024-10-30 17:30:47.835302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.895 [2024-10-30 17:30:47.835361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:04.895 [2024-10-30 17:30:47.835377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.914 ms 00:27:04.896 [2024-10-30 17:30:47.835386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.896 [2024-10-30 17:30:47.846789] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:04.896 [2024-10-30 17:30:47.849874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.896 [2024-10-30 17:30:47.849917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:04.896 [2024-10-30 17:30:47.849930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.428 ms 00:27:04.896 [2024-10-30 17:30:47.849941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.896 [2024-10-30 17:30:47.850024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.896 [2024-10-30 17:30:47.850035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:04.896 [2024-10-30 17:30:47.850045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:04.896 [2024-10-30 17:30:47.850054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.896 [2024-10-30 17:30:47.850124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.896 [2024-10-30 17:30:47.850139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:04.896 [2024-10-30 17:30:47.850148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:04.896 [2024-10-30 17:30:47.850156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.896 [2024-10-30 17:30:47.850178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.896 [2024-10-30 17:30:47.850188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:04.896 [2024-10-30 17:30:47.850196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:04.896 [2024-10-30 17:30:47.850229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.896 [2024-10-30 17:30:47.850264] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:04.896 [2024-10-30 17:30:47.850276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.896 [2024-10-30 17:30:47.850285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:04.896 [2024-10-30 17:30:47.850296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:04.896 [2024-10-30 17:30:47.850304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.158 [2024-10-30 17:30:47.875960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.158 [2024-10-30 17:30:47.876008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:05.158 [2024-10-30 17:30:47.876022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.636 ms 00:27:05.158 [2024-10-30 17:30:47.876030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.158 [2024-10-30 17:30:47.876122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.158 [2024-10-30 17:30:47.876133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:05.158 [2024-10-30 17:30:47.876143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:05.158 [2024-10-30 17:30:47.876152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
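(Editorial aside, not part of the captured test output: the layout figures logged above are internally consistent — the L2P table needs 20971520 entries x 4 bytes = 80 MiB, which matches the "Region l2p ... blocks: 80.00 MiB" entry in the NV cache layout dump, and the nvc superblock metadata regions tile the device back to back. A minimal cross-check sketch in plain Python, with the region tuples transcribed by hand from the ftl_superblock_v5_md_layout_dump notices above; it is independent of SPDK and purely illustrative.)

    # Cross-check of the FTL layout figures reported by ftl_layout_setup above.
    l2p_entries = 20971520        # "L2P entries: 20971520"
    l2p_addr_size = 4             # "L2P address size: 4" (bytes per entry)
    # Matches "Region l2p ... blocks: 80.00 MiB" in the NV cache layout dump.
    assert l2p_entries * l2p_addr_size == 80 * 1024 * 1024

    # "SB metadata layout - nvc": (type, blk_offs, blk_sz) transcribed from the dump.
    # Consecutive regions should be contiguous, ending at the free region (type 0xfffffffe).
    nvc_regions = [
        (0x0, 0x0, 0x20), (0x2, 0x20, 0x5000), (0x3, 0x5020, 0x80), (0x4, 0x50a0, 0x80),
        (0xa, 0x5120, 0x800), (0xb, 0x5920, 0x800), (0xc, 0x6120, 0x800), (0xd, 0x6920, 0x800),
        (0xe, 0x7120, 0x40), (0xf, 0x7160, 0x40), (0x10, 0x71a0, 0x20), (0x11, 0x71c0, 0x20),
        (0x6, 0x71e0, 0x20), (0x7, 0x7200, 0x20), (0xfffffffe, 0x7220, 0x13c0e0),
    ]
    for (_, offs, size), (_, next_offs, _) in zip(nvc_regions, nvc_regions[1:]):
        assert offs + size == next_offs, "gap or overlap between metadata regions"
    print("layout figures consistent")

(End of aside; the captured log continues below.)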
00:27:05.158 [2024-10-30 17:30:47.877416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.665 ms, result 0 00:27:06.103  [2024-10-30T17:30:50.027Z] Copying: 16/1024 [MB] (16 MBps) [2024-10-30T17:30:50.969Z] Copying: 33/1024 [MB] (17 MBps) [2024-10-30T17:30:51.910Z] Copying: 54/1024 [MB] (20 MBps) [2024-10-30T17:30:53.296Z] Copying: 67/1024 [MB] (13 MBps) [2024-10-30T17:30:54.240Z] Copying: 95/1024 [MB] (27 MBps) [2024-10-30T17:30:55.182Z] Copying: 114/1024 [MB] (19 MBps) [2024-10-30T17:30:56.119Z] Copying: 128/1024 [MB] (13 MBps) [2024-10-30T17:30:57.060Z] Copying: 150/1024 [MB] (21 MBps) [2024-10-30T17:30:58.003Z] Copying: 175/1024 [MB] (24 MBps) [2024-10-30T17:30:58.947Z] Copying: 194/1024 [MB] (18 MBps) [2024-10-30T17:31:00.330Z] Copying: 214/1024 [MB] (20 MBps) [2024-10-30T17:31:00.902Z] Copying: 233/1024 [MB] (19 MBps) [2024-10-30T17:31:02.288Z] Copying: 252/1024 [MB] (18 MBps) [2024-10-30T17:31:03.236Z] Copying: 271/1024 [MB] (18 MBps) [2024-10-30T17:31:04.184Z] Copying: 289/1024 [MB] (18 MBps) [2024-10-30T17:31:05.204Z] Copying: 315/1024 [MB] (26 MBps) [2024-10-30T17:31:06.149Z] Copying: 333/1024 [MB] (17 MBps) [2024-10-30T17:31:07.095Z] Copying: 346/1024 [MB] (13 MBps) [2024-10-30T17:31:08.040Z] Copying: 361/1024 [MB] (14 MBps) [2024-10-30T17:31:08.986Z] Copying: 378/1024 [MB] (17 MBps) [2024-10-30T17:31:09.929Z] Copying: 393/1024 [MB] (15 MBps) [2024-10-30T17:31:11.313Z] Copying: 412/1024 [MB] (18 MBps) [2024-10-30T17:31:12.257Z] Copying: 431/1024 [MB] (19 MBps) [2024-10-30T17:31:13.202Z] Copying: 446/1024 [MB] (15 MBps) [2024-10-30T17:31:14.148Z] Copying: 463/1024 [MB] (17 MBps) [2024-10-30T17:31:15.092Z] Copying: 477/1024 [MB] (13 MBps) [2024-10-30T17:31:16.034Z] Copying: 489/1024 [MB] (12 MBps) [2024-10-30T17:31:17.084Z] Copying: 503/1024 [MB] (13 MBps) [2024-10-30T17:31:18.027Z] Copying: 525/1024 [MB] (21 MBps) [2024-10-30T17:31:18.972Z] Copying: 545/1024 [MB] (19 MBps) [2024-10-30T17:31:19.915Z] Copying: 559/1024 [MB] (14 MBps) [2024-10-30T17:31:21.302Z] Copying: 573/1024 [MB] (13 MBps) [2024-10-30T17:31:22.246Z] Copying: 590/1024 [MB] (16 MBps) [2024-10-30T17:31:23.192Z] Copying: 616/1024 [MB] (25 MBps) [2024-10-30T17:31:24.137Z] Copying: 637/1024 [MB] (20 MBps) [2024-10-30T17:31:25.082Z] Copying: 651/1024 [MB] (14 MBps) [2024-10-30T17:31:26.027Z] Copying: 678/1024 [MB] (26 MBps) [2024-10-30T17:31:26.972Z] Copying: 697/1024 [MB] (19 MBps) [2024-10-30T17:31:27.917Z] Copying: 714/1024 [MB] (16 MBps) [2024-10-30T17:31:29.307Z] Copying: 727/1024 [MB] (13 MBps) [2024-10-30T17:31:30.248Z] Copying: 745/1024 [MB] (18 MBps) [2024-10-30T17:31:31.189Z] Copying: 769/1024 [MB] (24 MBps) [2024-10-30T17:31:32.132Z] Copying: 795/1024 [MB] (26 MBps) [2024-10-30T17:31:33.077Z] Copying: 813/1024 [MB] (17 MBps) [2024-10-30T17:31:34.019Z] Copying: 834/1024 [MB] (20 MBps) [2024-10-30T17:31:34.966Z] Copying: 848/1024 [MB] (14 MBps) [2024-10-30T17:31:35.911Z] Copying: 871/1024 [MB] (22 MBps) [2024-10-30T17:31:37.297Z] Copying: 892/1024 [MB] (21 MBps) [2024-10-30T17:31:38.238Z] Copying: 907/1024 [MB] (14 MBps) [2024-10-30T17:31:39.182Z] Copying: 927/1024 [MB] (20 MBps) [2024-10-30T17:31:40.127Z] Copying: 952/1024 [MB] (25 MBps) [2024-10-30T17:31:41.072Z] Copying: 967/1024 [MB] (14 MBps) [2024-10-30T17:31:42.016Z] Copying: 983/1024 [MB] (15 MBps) [2024-10-30T17:31:42.962Z] Copying: 997/1024 [MB] (14 MBps) [2024-10-30T17:31:42.962Z] Copying: 1023/1024 [MB] (26 MBps) [2024-10-30T17:31:42.962Z] Copying: 1024/1024 [MB] (average 18 
MBps)[2024-10-30 17:31:42.903773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.981 [2024-10-30 17:31:42.903880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:59.981 [2024-10-30 17:31:42.903939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:59.981 [2024-10-30 17:31:42.903963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.981 [2024-10-30 17:31:42.903997] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:59.981 [2024-10-30 17:31:42.906639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.981 [2024-10-30 17:31:42.906737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:59.981 [2024-10-30 17:31:42.906791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:27:59.981 [2024-10-30 17:31:42.906813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.981 [2024-10-30 17:31:42.909231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.981 [2024-10-30 17:31:42.909323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:59.981 [2024-10-30 17:31:42.909373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.381 ms 00:27:59.981 [2024-10-30 17:31:42.909395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.981 [2024-10-30 17:31:42.909432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.981 [2024-10-30 17:31:42.909452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:59.981 [2024-10-30 17:31:42.909471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:59.981 [2024-10-30 17:31:42.909491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.981 [2024-10-30 17:31:42.909546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.981 [2024-10-30 17:31:42.909567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:59.981 [2024-10-30 17:31:42.909629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:59.981 [2024-10-30 17:31:42.909651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.981 [2024-10-30 17:31:42.909676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:59.981 [2024-10-30 17:31:42.909700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.909730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.909759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.909891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.909920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.909948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 
17:31:42.910038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 
00:27:59.981 [2024-10-30 17:31:42.910355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 
wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:59.981 [2024-10-30 17:31:42.910547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:59.982 [2024-10-30 17:31:42.910862] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:59.982 [2024-10-30 17:31:42.910870] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c85ed329-2507-423c-8fa1-d5f290e67da9 00:27:59.982 [2024-10-30 17:31:42.910877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:59.982 [2024-10-30 17:31:42.910884] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:59.982 [2024-10-30 17:31:42.910891] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:59.982 [2024-10-30 17:31:42.910898] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:59.982 [2024-10-30 17:31:42.910905] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:59.982 [2024-10-30 17:31:42.910915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:27:59.982 [2024-10-30 17:31:42.910922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:59.982 [2024-10-30 17:31:42.910928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:59.982 [2024-10-30 17:31:42.910935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:59.982 [2024-10-30 17:31:42.910942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.982 [2024-10-30 17:31:42.910950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:59.982 [2024-10-30 17:31:42.910958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:27:59.982 [2024-10-30 17:31:42.910965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.982 [2024-10-30 17:31:42.923044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.982 [2024-10-30 17:31:42.923075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:59.982 [2024-10-30 17:31:42.923090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.063 ms 00:27:59.982 [2024-10-30 17:31:42.923098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.982 [2024-10-30 17:31:42.923464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.982 [2024-10-30 17:31:42.923477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:59.982 [2024-10-30 17:31:42.923486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:27:59.982 [2024-10-30 17:31:42.923493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.982 [2024-10-30 17:31:42.956112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.982 [2024-10-30 17:31:42.956146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:59.982 [2024-10-30 17:31:42.956158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.982 [2024-10-30 17:31:42.956165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.982 [2024-10-30 17:31:42.956229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.982 [2024-10-30 17:31:42.956238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:59.982 [2024-10-30 17:31:42.956245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.982 [2024-10-30 17:31:42.956253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.982 [2024-10-30 17:31:42.956293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.982 [2024-10-30 17:31:42.956302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:59.982 [2024-10-30 17:31:42.956309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.982 [2024-10-30 17:31:42.956319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.982 [2024-10-30 17:31:42.956353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.982 [2024-10-30 17:31:42.956361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:59.982 [2024-10-30 17:31:42.956368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.982 [2024-10-30 17:31:42.956376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.245 [2024-10-30 17:31:43.034127] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.245 [2024-10-30 17:31:43.034168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:00.245 [2024-10-30 17:31:43.034184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.245 [2024-10-30 17:31:43.034191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.245 [2024-10-30 17:31:43.097601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.245 [2024-10-30 17:31:43.097642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:00.245 [2024-10-30 17:31:43.097653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.245 [2024-10-30 17:31:43.097661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.245 [2024-10-30 17:31:43.097727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.245 [2024-10-30 17:31:43.097736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:00.245 [2024-10-30 17:31:43.097744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.245 [2024-10-30 17:31:43.097752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.245 [2024-10-30 17:31:43.097797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.245 [2024-10-30 17:31:43.097806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:00.245 [2024-10-30 17:31:43.097814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.245 [2024-10-30 17:31:43.097822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.245 [2024-10-30 17:31:43.097895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.245 [2024-10-30 17:31:43.097905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:00.245 [2024-10-30 17:31:43.097913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.245 [2024-10-30 17:31:43.097920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.245 [2024-10-30 17:31:43.097954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.245 [2024-10-30 17:31:43.097963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:00.245 [2024-10-30 17:31:43.097970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.245 [2024-10-30 17:31:43.097978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.245 [2024-10-30 17:31:43.098011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.245 [2024-10-30 17:31:43.098020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:00.245 [2024-10-30 17:31:43.098028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.245 [2024-10-30 17:31:43.098036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.245 [2024-10-30 17:31:43.098078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.245 [2024-10-30 17:31:43.098087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:00.245 [2024-10-30 17:31:43.098095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.245 [2024-10-30 17:31:43.098104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:00.245 [2024-10-30 17:31:43.098237] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 194.410 ms, result 0 00:28:01.189 00:28:01.189 00:28:01.189 17:31:43 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:01.189 [2024-10-30 17:31:43.918900] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:28:01.189 [2024-10-30 17:31:43.919048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81441 ] 00:28:01.189 [2024-10-30 17:31:44.083913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.450 [2024-10-30 17:31:44.208329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.714 [2024-10-30 17:31:44.495745] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:01.714 [2024-10-30 17:31:44.495826] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:01.714 [2024-10-30 17:31:44.656695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.656757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:01.714 [2024-10-30 17:31:44.656776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:01.714 [2024-10-30 17:31:44.656785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.656839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.656850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.714 [2024-10-30 17:31:44.656862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:01.714 [2024-10-30 17:31:44.656869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.656891] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:01.714 [2024-10-30 17:31:44.658004] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:01.714 [2024-10-30 17:31:44.658068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.658079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.714 [2024-10-30 17:31:44.658090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:28:01.714 [2024-10-30 17:31:44.658098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.658590] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:01.714 [2024-10-30 17:31:44.658632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.658641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:01.714 [2024-10-30 17:31:44.658658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:01.714 [2024-10-30 17:31:44.658666] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.658720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.658730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:01.714 [2024-10-30 17:31:44.658739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:01.714 [2024-10-30 17:31:44.658747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.659033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.659054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.714 [2024-10-30 17:31:44.659065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:28:01.714 [2024-10-30 17:31:44.659073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.659141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.659150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.714 [2024-10-30 17:31:44.659159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:01.714 [2024-10-30 17:31:44.659166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.659189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.659216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:01.714 [2024-10-30 17:31:44.659226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:01.714 [2024-10-30 17:31:44.659237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.659256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:01.714 [2024-10-30 17:31:44.663453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.663495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.714 [2024-10-30 17:31:44.663506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.203 ms 00:28:01.714 [2024-10-30 17:31:44.663515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.663554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.714 [2024-10-30 17:31:44.663564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:01.714 [2024-10-30 17:31:44.663573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:01.714 [2024-10-30 17:31:44.663581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.714 [2024-10-30 17:31:44.663640] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:01.714 [2024-10-30 17:31:44.663666] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:01.714 [2024-10-30 17:31:44.663708] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:01.714 [2024-10-30 17:31:44.663726] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:01.715 [2024-10-30 17:31:44.663833] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:01.715 [2024-10-30 17:31:44.663845] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:01.715 [2024-10-30 17:31:44.663857] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:01.715 [2024-10-30 17:31:44.663868] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:01.715 [2024-10-30 17:31:44.663878] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:01.715 [2024-10-30 17:31:44.663888] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:01.715 [2024-10-30 17:31:44.663897] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:01.715 [2024-10-30 17:31:44.663909] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:01.715 [2024-10-30 17:31:44.663917] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:01.715 [2024-10-30 17:31:44.663925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.715 [2024-10-30 17:31:44.663933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:01.715 [2024-10-30 17:31:44.663941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:28:01.715 [2024-10-30 17:31:44.663949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.715 [2024-10-30 17:31:44.664030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.715 [2024-10-30 17:31:44.664039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:01.715 [2024-10-30 17:31:44.664047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:01.715 [2024-10-30 17:31:44.664054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.715 [2024-10-30 17:31:44.664158] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:01.715 [2024-10-30 17:31:44.664168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:01.715 [2024-10-30 17:31:44.664177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:01.715 [2024-10-30 17:31:44.664227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:01.715 [2024-10-30 17:31:44.664251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:01.715 [2024-10-30 17:31:44.664267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:01.715 [2024-10-30 17:31:44.664274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:01.715 [2024-10-30 17:31:44.664281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:01.715 [2024-10-30 
17:31:44.664288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:01.715 [2024-10-30 17:31:44.664295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:01.715 [2024-10-30 17:31:44.664302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:01.715 [2024-10-30 17:31:44.664322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:01.715 [2024-10-30 17:31:44.664343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:01.715 [2024-10-30 17:31:44.664363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:01.715 [2024-10-30 17:31:44.664383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:01.715 [2024-10-30 17:31:44.664404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:01.715 [2024-10-30 17:31:44.664423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:01.715 [2024-10-30 17:31:44.664436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:01.715 [2024-10-30 17:31:44.664442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:01.715 [2024-10-30 17:31:44.664449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:01.715 [2024-10-30 17:31:44.664455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:01.715 [2024-10-30 17:31:44.664461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:01.715 [2024-10-30 17:31:44.664468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:01.715 [2024-10-30 17:31:44.664481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:01.715 [2024-10-30 17:31:44.664490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664497] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:01.715 [2024-10-30 17:31:44.664506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:28:01.715 [2024-10-30 17:31:44.664514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.715 [2024-10-30 17:31:44.664529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:01.715 [2024-10-30 17:31:44.664536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:01.715 [2024-10-30 17:31:44.664544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:01.715 [2024-10-30 17:31:44.664552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:01.715 [2024-10-30 17:31:44.664559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:01.715 [2024-10-30 17:31:44.664566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:01.715 [2024-10-30 17:31:44.664575] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:01.715 [2024-10-30 17:31:44.664584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:01.715 [2024-10-30 17:31:44.664595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:01.715 [2024-10-30 17:31:44.664603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:01.715 [2024-10-30 17:31:44.664610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:01.715 [2024-10-30 17:31:44.664618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:01.715 [2024-10-30 17:31:44.664625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:01.715 [2024-10-30 17:31:44.664632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:01.715 [2024-10-30 17:31:44.664640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:01.715 [2024-10-30 17:31:44.664647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:01.715 [2024-10-30 17:31:44.664654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:01.715 [2024-10-30 17:31:44.664661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:01.715 [2024-10-30 17:31:44.664668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:01.715 [2024-10-30 17:31:44.664675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:01.715 [2024-10-30 17:31:44.664682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:01.715 [2024-10-30 17:31:44.664689] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:01.715 [2024-10-30 17:31:44.664696] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:01.715 [2024-10-30 17:31:44.664704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:01.715 [2024-10-30 17:31:44.664714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:01.715 [2024-10-30 17:31:44.664721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:01.715 [2024-10-30 17:31:44.664727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:01.715 [2024-10-30 17:31:44.664737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:01.715 [2024-10-30 17:31:44.664745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.715 [2024-10-30 17:31:44.664754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:01.715 [2024-10-30 17:31:44.664762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:28:01.715 [2024-10-30 17:31:44.664769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.715 [2024-10-30 17:31:44.692417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.715 [2024-10-30 17:31:44.692588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:01.715 [2024-10-30 17:31:44.692657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.607 ms 00:28:01.715 [2024-10-30 17:31:44.692682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.715 [2024-10-30 17:31:44.692783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.715 [2024-10-30 17:31:44.692806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:01.715 [2024-10-30 17:31:44.692826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:01.715 [2024-10-30 17:31:44.692852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.735939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.736123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:01.978 [2024-10-30 17:31:44.736189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.015 ms 00:28:01.978 [2024-10-30 17:31:44.736237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.736297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.736331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:01.978 [2024-10-30 17:31:44.736357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:01.978 [2024-10-30 17:31:44.736377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.736504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 
17:31:44.736680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:01.978 [2024-10-30 17:31:44.736706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:01.978 [2024-10-30 17:31:44.736725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.736878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.736902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:01.978 [2024-10-30 17:31:44.736983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:28:01.978 [2024-10-30 17:31:44.737006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.752277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.752433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:01.978 [2024-10-30 17:31:44.752493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.236 ms 00:28:01.978 [2024-10-30 17:31:44.752515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.752682] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:01.978 [2024-10-30 17:31:44.752839] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:01.978 [2024-10-30 17:31:44.752896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.752917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:01.978 [2024-10-30 17:31:44.752943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:28:01.978 [2024-10-30 17:31:44.753468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.765906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.766060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:01.978 [2024-10-30 17:31:44.766119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.275 ms 00:28:01.978 [2024-10-30 17:31:44.766143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.766299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.766377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:01.978 [2024-10-30 17:31:44.766402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:28:01.978 [2024-10-30 17:31:44.766421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.766560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.766590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:01.978 [2024-10-30 17:31:44.766611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:01.978 [2024-10-30 17:31:44.766630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.767268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.767376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize P2L checkpointing 00:28:01.978 [2024-10-30 17:31:44.767438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:28:01.978 [2024-10-30 17:31:44.767461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.767494] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:01.978 [2024-10-30 17:31:44.767531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.767551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:01.978 [2024-10-30 17:31:44.767571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:01.978 [2024-10-30 17:31:44.767589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.779995] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:01.978 [2024-10-30 17:31:44.780279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.780314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:01.978 [2024-10-30 17:31:44.780388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:28:01.978 [2024-10-30 17:31:44.780412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.782615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.782736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:01.978 [2024-10-30 17:31:44.782758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.161 ms 00:28:01.978 [2024-10-30 17:31:44.782766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.782866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.782877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:01.978 [2024-10-30 17:31:44.782887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:01.978 [2024-10-30 17:31:44.782894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.782917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.782927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:01.978 [2024-10-30 17:31:44.782939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:01.978 [2024-10-30 17:31:44.782947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.782977] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:01.978 [2024-10-30 17:31:44.782986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.782994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:01.978 [2024-10-30 17:31:44.783003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:01.978 [2024-10-30 17:31:44.783010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.809417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.809584] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:01.978 [2024-10-30 17:31:44.809603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.387 ms 00:28:01.978 [2024-10-30 17:31:44.809611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.809689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.978 [2024-10-30 17:31:44.809699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:01.978 [2024-10-30 17:31:44.809708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:01.978 [2024-10-30 17:31:44.809715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.978 [2024-10-30 17:31:44.810901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.747 ms, result 0 00:28:03.368  [2024-10-30T17:31:47.295Z] Copying: 19/1024 [MB] (19 MBps) [2024-10-30T17:31:48.241Z] Copying: 42/1024 [MB] (22 MBps) [2024-10-30T17:31:49.186Z] Copying: 60/1024 [MB] (18 MBps) [2024-10-30T17:31:50.130Z] Copying: 75/1024 [MB] (14 MBps) [2024-10-30T17:31:51.071Z] Copying: 97/1024 [MB] (21 MBps) [2024-10-30T17:31:52.010Z] Copying: 119/1024 [MB] (21 MBps) [2024-10-30T17:31:53.395Z] Copying: 135/1024 [MB] (16 MBps) [2024-10-30T17:31:54.341Z] Copying: 155/1024 [MB] (19 MBps) [2024-10-30T17:31:55.284Z] Copying: 180/1024 [MB] (25 MBps) [2024-10-30T17:31:56.226Z] Copying: 203/1024 [MB] (22 MBps) [2024-10-30T17:31:57.170Z] Copying: 218/1024 [MB] (15 MBps) [2024-10-30T17:31:58.116Z] Copying: 236/1024 [MB] (17 MBps) [2024-10-30T17:31:59.062Z] Copying: 252/1024 [MB] (16 MBps) [2024-10-30T17:32:00.008Z] Copying: 272/1024 [MB] (20 MBps) [2024-10-30T17:32:01.014Z] Copying: 295/1024 [MB] (22 MBps) [2024-10-30T17:32:02.402Z] Copying: 317/1024 [MB] (22 MBps) [2024-10-30T17:32:03.346Z] Copying: 339/1024 [MB] (21 MBps) [2024-10-30T17:32:04.290Z] Copying: 363/1024 [MB] (24 MBps) [2024-10-30T17:32:05.235Z] Copying: 384/1024 [MB] (21 MBps) [2024-10-30T17:32:06.180Z] Copying: 409/1024 [MB] (24 MBps) [2024-10-30T17:32:07.123Z] Copying: 426/1024 [MB] (16 MBps) [2024-10-30T17:32:08.066Z] Copying: 442/1024 [MB] (16 MBps) [2024-10-30T17:32:09.010Z] Copying: 459/1024 [MB] (17 MBps) [2024-10-30T17:32:10.397Z] Copying: 481/1024 [MB] (22 MBps) [2024-10-30T17:32:11.338Z] Copying: 502/1024 [MB] (20 MBps) [2024-10-30T17:32:12.280Z] Copying: 522/1024 [MB] (19 MBps) [2024-10-30T17:32:13.226Z] Copying: 539/1024 [MB] (16 MBps) [2024-10-30T17:32:14.171Z] Copying: 561/1024 [MB] (21 MBps) [2024-10-30T17:32:15.113Z] Copying: 577/1024 [MB] (16 MBps) [2024-10-30T17:32:16.059Z] Copying: 596/1024 [MB] (18 MBps) [2024-10-30T17:32:17.002Z] Copying: 606/1024 [MB] (10 MBps) [2024-10-30T17:32:18.388Z] Copying: 621/1024 [MB] (14 MBps) [2024-10-30T17:32:19.331Z] Copying: 637/1024 [MB] (15 MBps) [2024-10-30T17:32:20.274Z] Copying: 648/1024 [MB] (11 MBps) [2024-10-30T17:32:21.218Z] Copying: 659/1024 [MB] (10 MBps) [2024-10-30T17:32:22.164Z] Copying: 672/1024 [MB] (13 MBps) [2024-10-30T17:32:23.108Z] Copying: 686/1024 [MB] (13 MBps) [2024-10-30T17:32:24.053Z] Copying: 701/1024 [MB] (15 MBps) [2024-10-30T17:32:24.997Z] Copying: 718/1024 [MB] (16 MBps) [2024-10-30T17:32:26.384Z] Copying: 733/1024 [MB] (15 MBps) [2024-10-30T17:32:27.368Z] Copying: 754/1024 [MB] (20 MBps) [2024-10-30T17:32:28.313Z] Copying: 771/1024 [MB] (17 MBps) [2024-10-30T17:32:29.255Z] Copying: 791/1024 [MB] (19 MBps) [2024-10-30T17:32:30.204Z] Copying: 812/1024 [MB] (20 MBps) 
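The bracketed "Copying: N/1024 [MB] (… MBps)" entries in this stretch are spdk_dd's periodic progress output for the 1024 MiB restore write into ftl0; each parenthesised rate is roughly the MiB delta between consecutive entries divided by the wall-clock interval, and the run closes a little further down with an overall average of 18 MBps. A minimal sketch of that arithmetic, assuming the exact entry format shown here (an illustrative parser only, not part of the test suite):

# Sketch: per-interval MB/s from two consecutive spdk_dd progress entries,
# e.g. "[2024-10-30T17:32:30.204Z] Copying: 812/1024 [MB] (20 MBps)".
import re
from datetime import datetime

PROGRESS = re.compile(r"\[(?P<ts>[^\]]+)Z\] Copying: (?P<mib>\d+)/\d+ \[MB\]")

def interval_mbps(prev_entry, next_entry):
    prev, nxt = PROGRESS.search(prev_entry), PROGRESS.search(next_entry)
    dt = (datetime.fromisoformat(nxt["ts"]) - datetime.fromisoformat(prev["ts"])).total_seconds()
    return (int(nxt["mib"]) - int(prev["mib"])) / dt

# 812 MiB at 17:32:30.204Z -> 842 MiB at 17:32:31.148Z is ~32 MB/s over 0.944 s,
# in line with the "(30 MBps)" printed for that interval.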
[2024-10-30T17:32:31.148Z] Copying: 842/1024 [MB] (30 MBps) [2024-10-30T17:32:32.093Z] Copying: 860/1024 [MB] (18 MBps) [2024-10-30T17:32:33.037Z] Copying: 874/1024 [MB] (14 MBps) [2024-10-30T17:32:34.424Z] Copying: 888/1024 [MB] (14 MBps) [2024-10-30T17:32:34.997Z] Copying: 900/1024 [MB] (11 MBps) [2024-10-30T17:32:36.386Z] Copying: 913/1024 [MB] (13 MBps) [2024-10-30T17:32:37.329Z] Copying: 924/1024 [MB] (11 MBps) [2024-10-30T17:32:38.272Z] Copying: 936/1024 [MB] (11 MBps) [2024-10-30T17:32:39.213Z] Copying: 951/1024 [MB] (15 MBps) [2024-10-30T17:32:40.158Z] Copying: 975/1024 [MB] (23 MBps) [2024-10-30T17:32:41.102Z] Copying: 997/1024 [MB] (22 MBps) [2024-10-30T17:32:41.363Z] Copying: 1018/1024 [MB] (20 MBps) [2024-10-30T17:32:41.627Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-10-30 17:32:41.562918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.646 [2024-10-30 17:32:41.563008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:58.646 [2024-10-30 17:32:41.563027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:58.646 [2024-10-30 17:32:41.563037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.646 [2024-10-30 17:32:41.563061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:58.646 [2024-10-30 17:32:41.566240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.646 [2024-10-30 17:32:41.566338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:58.646 [2024-10-30 17:32:41.566350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:28:58.646 [2024-10-30 17:32:41.566359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.646 [2024-10-30 17:32:41.566607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.646 [2024-10-30 17:32:41.566620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:58.646 [2024-10-30 17:32:41.566630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:28:58.646 [2024-10-30 17:32:41.566639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.646 [2024-10-30 17:32:41.566669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.646 [2024-10-30 17:32:41.566679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:58.646 [2024-10-30 17:32:41.566692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:58.646 [2024-10-30 17:32:41.566700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.646 [2024-10-30 17:32:41.566763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.647 [2024-10-30 17:32:41.566773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:58.647 [2024-10-30 17:32:41.566782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:58.647 [2024-10-30 17:32:41.566791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.647 [2024-10-30 17:32:41.566806] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:58.647 [2024-10-30 17:32:41.566819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.566998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567243] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 
17:32:41.567442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:58.647 [2024-10-30 17:32:41.567500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:58.648 [2024-10-30 17:32:41.567634] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:58.648 [2024-10-30 17:32:41.567642] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c85ed329-2507-423c-8fa1-d5f290e67da9 00:28:58.648 [2024-10-30 17:32:41.567652] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:58.648 [2024-10-30 17:32:41.567660] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:58.648 [2024-10-30 17:32:41.567668] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:58.648 [2024-10-30 17:32:41.567676] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:58.648 [2024-10-30 17:32:41.567683] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:58.648 [2024-10-30 17:32:41.567691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:58.648 [2024-10-30 17:32:41.567698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:58.648 [2024-10-30 17:32:41.567704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:58.648 [2024-10-30 17:32:41.567711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:58.648 [2024-10-30 17:32:41.567718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.648 [2024-10-30 17:32:41.567725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:58.648 [2024-10-30 17:32:41.567732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:28:58.648 [2024-10-30 17:32:41.567740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.648 [2024-10-30 17:32:41.582862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.648 [2024-10-30 17:32:41.582916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:58.648 [2024-10-30 17:32:41.582929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.103 ms 00:28:58.648 [2024-10-30 17:32:41.582938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.648 [2024-10-30 17:32:41.583364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.648 [2024-10-30 17:32:41.583382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:58.648 [2024-10-30 17:32:41.583392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:28:58.648 [2024-10-30 17:32:41.583400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.648 [2024-10-30 17:32:41.622213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.648 [2024-10-30 17:32:41.622282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:58.648 [2024-10-30 17:32:41.622295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.648 [2024-10-30 17:32:41.622304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.648 [2024-10-30 17:32:41.622383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.648 [2024-10-30 17:32:41.622392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:58.648 [2024-10-30 17:32:41.622400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.648 [2024-10-30 17:32:41.622409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.648 [2024-10-30 17:32:41.622483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.648 [2024-10-30 17:32:41.622494] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:58.648 [2024-10-30 17:32:41.622503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.648 [2024-10-30 17:32:41.622511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.648 [2024-10-30 17:32:41.622527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.648 [2024-10-30 17:32:41.622535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:58.648 [2024-10-30 17:32:41.622543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.648 [2024-10-30 17:32:41.622551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.711016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.910 [2024-10-30 17:32:41.711084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:58.910 [2024-10-30 17:32:41.711101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.910 [2024-10-30 17:32:41.711110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.781861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.910 [2024-10-30 17:32:41.781922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:58.910 [2024-10-30 17:32:41.781935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.910 [2024-10-30 17:32:41.781945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.782045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.910 [2024-10-30 17:32:41.782056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.910 [2024-10-30 17:32:41.782065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.910 [2024-10-30 17:32:41.782074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.782112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.910 [2024-10-30 17:32:41.782127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.910 [2024-10-30 17:32:41.782136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.910 [2024-10-30 17:32:41.782144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.782262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.910 [2024-10-30 17:32:41.782278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.910 [2024-10-30 17:32:41.782287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.910 [2024-10-30 17:32:41.782295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.782323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.910 [2024-10-30 17:32:41.782332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:58.910 [2024-10-30 17:32:41.782340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.910 [2024-10-30 17:32:41.782349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.782393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:58.910 [2024-10-30 17:32:41.782408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.910 [2024-10-30 17:32:41.782416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.910 [2024-10-30 17:32:41.782424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.782469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.910 [2024-10-30 17:32:41.782480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.910 [2024-10-30 17:32:41.782488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.910 [2024-10-30 17:32:41.782496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.910 [2024-10-30 17:32:41.782630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 219.680 ms, result 0 00:28:59.853 00:28:59.853 00:28:59.853 17:32:42 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:01.770 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:01.770 17:32:44 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:01.770 [2024-10-30 17:32:44.483974] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 00:29:01.770 [2024-10-30 17:32:44.484096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82049 ] 00:29:01.770 [2024-10-30 17:32:44.637027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.032 [2024-10-30 17:32:44.766847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.295 [2024-10-30 17:32:45.056162] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.295 [2024-10-30 17:32:45.056528] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.295 [2024-10-30 17:32:45.217456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.295 [2024-10-30 17:32:45.217522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:02.295 [2024-10-30 17:32:45.217541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:02.295 [2024-10-30 17:32:45.217550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.295 [2024-10-30 17:32:45.217609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.295 [2024-10-30 17:32:45.217619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:02.295 [2024-10-30 17:32:45.217631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:02.295 [2024-10-30 17:32:45.217639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.295 [2024-10-30 17:32:45.217659] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:02.295 [2024-10-30 17:32:45.218400] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
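Just above, the fast-shutdown pass finishes, md5sum -c confirms the previously restored testfile, and a second spdk_dd write is started with --ob=ftl0 --seek=131072. Assuming --seek counts output blocks (as with dd's seek=) and that ftl0 exposes the 4 KiB FTL block size implied by the layout dumps, that places the second testfile 512 MiB into the device. A one-line sanity check of that arithmetic (the block size is inferred, not printed by this log):

# Sketch: byte offset implied by "--seek=131072", assuming --seek is in output
# blocks and the ftl0 bdev uses 4 KiB blocks (inferred from the FTL layout dumps).
FTL_BLOCK_SIZE = 4096                       # bytes, assumed
offset_bytes = 131072 * FTL_BLOCK_SIZE      # 536870912
print(offset_bytes / (1024 * 1024), "MiB")  # -> 512.0 MiB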
00:29:02.295 [2024-10-30 17:32:45.218423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.295 [2024-10-30 17:32:45.218433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:02.295 [2024-10-30 17:32:45.218442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:29:02.295 [2024-10-30 17:32:45.218450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.295 [2024-10-30 17:32:45.219126] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:02.295 [2024-10-30 17:32:45.219232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.295 [2024-10-30 17:32:45.219245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:02.295 [2024-10-30 17:32:45.219264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:29:02.295 [2024-10-30 17:32:45.219272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.295 [2024-10-30 17:32:45.219394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.295 [2024-10-30 17:32:45.219407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:02.295 [2024-10-30 17:32:45.219416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:02.295 [2024-10-30 17:32:45.219424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.295 [2024-10-30 17:32:45.219727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.295 [2024-10-30 17:32:45.219740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:02.295 [2024-10-30 17:32:45.219752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:29:02.295 [2024-10-30 17:32:45.219760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.295 [2024-10-30 17:32:45.219837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.295 [2024-10-30 17:32:45.219847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:02.295 [2024-10-30 17:32:45.219855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:02.295 [2024-10-30 17:32:45.219863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.295 [2024-10-30 17:32:45.219886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.295 [2024-10-30 17:32:45.219895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:02.296 [2024-10-30 17:32:45.219904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:02.296 [2024-10-30 17:32:45.219914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.296 [2024-10-30 17:32:45.219936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:02.296 [2024-10-30 17:32:45.224542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.296 [2024-10-30 17:32:45.224721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:02.296 [2024-10-30 17:32:45.225155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:29:02.296 [2024-10-30 17:32:45.225241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.296 [2024-10-30 17:32:45.225384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.296 
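The startup that continues below prints the FTL metadata layout twice, exactly as in the earlier startup above: once as human-readable regions ("Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB") and once as superblock entries ("Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"). The two views are related by the 4 KiB FTL block size; a small conversion sketch (block size assumed, since the log only implies it):

# Sketch: convert blk_offs/blk_sz from the superblock layout dump (counted in
# 4 KiB FTL blocks) into the MiB figures printed by dump_region.
FTL_BLOCK_SIZE = 4096  # bytes, assumed

def region_mib(blk_offs, blk_sz):
    to_mib = FTL_BLOCK_SIZE / (1024 * 1024)
    return blk_offs * to_mib, blk_sz * to_mib

# Region type:0x2 with blk_offs=0x20, blk_sz=0x5000 -> (0.125 MiB, 80.0 MiB),
# matching "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB" in the NV cache layout.
print(region_mib(0x20, 0x5000))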
[2024-10-30 17:32:45.225414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:02.296 [2024-10-30 17:32:45.225435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:02.296 [2024-10-30 17:32:45.225455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.296 [2024-10-30 17:32:45.225538] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:02.296 [2024-10-30 17:32:45.225660] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:02.296 [2024-10-30 17:32:45.225727] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:02.296 [2024-10-30 17:32:45.225784] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:02.296 [2024-10-30 17:32:45.225913] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:02.296 [2024-10-30 17:32:45.226012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:02.296 [2024-10-30 17:32:45.226026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:02.296 [2024-10-30 17:32:45.226036] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226046] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226054] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:02.296 [2024-10-30 17:32:45.226062] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:02.296 [2024-10-30 17:32:45.226073] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:02.296 [2024-10-30 17:32:45.226082] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:02.296 [2024-10-30 17:32:45.226092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.296 [2024-10-30 17:32:45.226100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:02.296 [2024-10-30 17:32:45.226108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:29:02.296 [2024-10-30 17:32:45.226115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.296 [2024-10-30 17:32:45.226230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.296 [2024-10-30 17:32:45.226241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:02.296 [2024-10-30 17:32:45.226249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:02.296 [2024-10-30 17:32:45.226256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.296 [2024-10-30 17:32:45.226366] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:02.296 [2024-10-30 17:32:45.226377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:02.296 [2024-10-30 17:32:45.226386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226402] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:02.296 [2024-10-30 17:32:45.226409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:02.296 [2024-10-30 17:32:45.226429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.296 [2024-10-30 17:32:45.226442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:02.296 [2024-10-30 17:32:45.226448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:02.296 [2024-10-30 17:32:45.226454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.296 [2024-10-30 17:32:45.226461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:02.296 [2024-10-30 17:32:45.226467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:02.296 [2024-10-30 17:32:45.226474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:02.296 [2024-10-30 17:32:45.226492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:02.296 [2024-10-30 17:32:45.226514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:02.296 [2024-10-30 17:32:45.226534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:02.296 [2024-10-30 17:32:45.226556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:02.296 [2024-10-30 17:32:45.226577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:02.296 [2024-10-30 17:32:45.226596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.296 [2024-10-30 17:32:45.226609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:02.296 [2024-10-30 17:32:45.226616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:02.296 [2024-10-30 
17:32:45.226623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.296 [2024-10-30 17:32:45.226629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:02.296 [2024-10-30 17:32:45.226636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:02.296 [2024-10-30 17:32:45.226643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:02.296 [2024-10-30 17:32:45.226656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:02.296 [2024-10-30 17:32:45.226662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226668] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:02.296 [2024-10-30 17:32:45.226676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:02.296 [2024-10-30 17:32:45.226684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.296 [2024-10-30 17:32:45.226698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:02.296 [2024-10-30 17:32:45.226705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:02.296 [2024-10-30 17:32:45.226711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:02.296 [2024-10-30 17:32:45.226718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:02.296 [2024-10-30 17:32:45.226724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:02.296 [2024-10-30 17:32:45.226730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:02.296 [2024-10-30 17:32:45.226739] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:02.296 [2024-10-30 17:32:45.226749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.296 [2024-10-30 17:32:45.226760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:02.296 [2024-10-30 17:32:45.226769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:02.296 [2024-10-30 17:32:45.226776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:02.296 [2024-10-30 17:32:45.226783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:02.296 [2024-10-30 17:32:45.226790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:02.296 [2024-10-30 17:32:45.226798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:02.296 [2024-10-30 17:32:45.226805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:02.296 [2024-10-30 17:32:45.226812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:02.296 [2024-10-30 17:32:45.226819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:02.296 [2024-10-30 17:32:45.226826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:02.296 [2024-10-30 17:32:45.226833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:02.296 [2024-10-30 17:32:45.226840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:02.296 [2024-10-30 17:32:45.226847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:02.296 [2024-10-30 17:32:45.226855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:02.296 [2024-10-30 17:32:45.226862] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:02.296 [2024-10-30 17:32:45.226870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.296 [2024-10-30 17:32:45.226878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:02.297 [2024-10-30 17:32:45.226885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:02.297 [2024-10-30 17:32:45.226893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:02.297 [2024-10-30 17:32:45.226901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:02.297 [2024-10-30 17:32:45.226909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.297 [2024-10-30 17:32:45.226916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:02.297 [2024-10-30 17:32:45.226924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:29:02.297 [2024-10-30 17:32:45.226931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.297 [2024-10-30 17:32:45.255337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.297 [2024-10-30 17:32:45.255507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:02.297 [2024-10-30 17:32:45.255569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.362 ms 00:29:02.297 [2024-10-30 17:32:45.255592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.297 [2024-10-30 17:32:45.255699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.297 [2024-10-30 17:32:45.255723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:02.297 [2024-10-30 17:32:45.255743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:02.297 [2024-10-30 17:32:45.255768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.309939] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.310143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:02.558 [2024-10-30 17:32:45.310238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.098 ms 00:29:02.558 [2024-10-30 17:32:45.310264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.310328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.310365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:02.558 [2024-10-30 17:32:45.310387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:02.558 [2024-10-30 17:32:45.310407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.310556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.310837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:02.558 [2024-10-30 17:32:45.310884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:02.558 [2024-10-30 17:32:45.310904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.311076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.311100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:02.558 [2024-10-30 17:32:45.311124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:29:02.558 [2024-10-30 17:32:45.311144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.327030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.327221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:02.558 [2024-10-30 17:32:45.327286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.855 ms 00:29:02.558 [2024-10-30 17:32:45.327308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.327460] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:02.558 [2024-10-30 17:32:45.327501] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:02.558 [2024-10-30 17:32:45.327533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.327553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:02.558 [2024-10-30 17:32:45.327645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:02.558 [2024-10-30 17:32:45.327668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.339958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.340112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:02.558 [2024-10-30 17:32:45.340176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.259 ms 00:29:02.558 [2024-10-30 17:32:45.340197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.340356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 
[2024-10-30 17:32:45.340380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:02.558 [2024-10-30 17:32:45.340400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:02.558 [2024-10-30 17:32:45.340419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.340541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.340569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:02.558 [2024-10-30 17:32:45.340590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:02.558 [2024-10-30 17:32:45.340610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.341246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.341293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:02.558 [2024-10-30 17:32:45.341315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:29:02.558 [2024-10-30 17:32:45.341334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.558 [2024-10-30 17:32:45.341434] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:02.558 [2024-10-30 17:32:45.341475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.558 [2024-10-30 17:32:45.341497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:02.559 [2024-10-30 17:32:45.341517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:02.559 [2024-10-30 17:32:45.341536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.559 [2024-10-30 17:32:45.354176] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:02.559 [2024-10-30 17:32:45.354486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.559 [2024-10-30 17:32:45.354521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:02.559 [2024-10-30 17:32:45.354589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.916 ms 00:29:02.559 [2024-10-30 17:32:45.354612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.559 [2024-10-30 17:32:45.356760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.559 [2024-10-30 17:32:45.356899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:02.559 [2024-10-30 17:32:45.356923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:29:02.559 [2024-10-30 17:32:45.356930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.559 [2024-10-30 17:32:45.357032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.559 [2024-10-30 17:32:45.357042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:02.559 [2024-10-30 17:32:45.357051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:02.559 [2024-10-30 17:32:45.357060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.559 [2024-10-30 17:32:45.357085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.559 [2024-10-30 17:32:45.357094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core 
poller 00:29:02.559 [2024-10-30 17:32:45.357107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:02.559 [2024-10-30 17:32:45.357115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.559 [2024-10-30 17:32:45.357146] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:02.559 [2024-10-30 17:32:45.357156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.559 [2024-10-30 17:32:45.357164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:02.559 [2024-10-30 17:32:45.357172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:02.559 [2024-10-30 17:32:45.357179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.559 [2024-10-30 17:32:45.384465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.559 [2024-10-30 17:32:45.384531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:02.559 [2024-10-30 17:32:45.384545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.236 ms 00:29:02.559 [2024-10-30 17:32:45.384554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.559 [2024-10-30 17:32:45.384650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.559 [2024-10-30 17:32:45.384661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:02.559 [2024-10-30 17:32:45.384671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:02.559 [2024-10-30 17:32:45.384678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.559 [2024-10-30 17:32:45.385948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 168.011 ms, result 0 00:29:03.503  [2024-10-30T17:32:47.429Z] Copying: 24/1024 [MB] (24 MBps) [2024-10-30T17:32:48.818Z] Copying: 41/1024 [MB] (17 MBps) [2024-10-30T17:32:49.762Z] Copying: 52/1024 [MB] (11 MBps) [2024-10-30T17:32:50.704Z] Copying: 75/1024 [MB] (22 MBps) [2024-10-30T17:32:51.644Z] Copying: 89/1024 [MB] (13 MBps) [2024-10-30T17:32:52.581Z] Copying: 104/1024 [MB] (15 MBps) [2024-10-30T17:32:53.523Z] Copying: 128/1024 [MB] (23 MBps) [2024-10-30T17:32:54.466Z] Copying: 138/1024 [MB] (10 MBps) [2024-10-30T17:32:55.410Z] Copying: 161/1024 [MB] (22 MBps) [2024-10-30T17:32:56.797Z] Copying: 184/1024 [MB] (23 MBps) [2024-10-30T17:32:57.741Z] Copying: 195/1024 [MB] (10 MBps) [2024-10-30T17:32:58.686Z] Copying: 207/1024 [MB] (12 MBps) [2024-10-30T17:32:59.628Z] Copying: 217/1024 [MB] (10 MBps) [2024-10-30T17:33:00.634Z] Copying: 252/1024 [MB] (34 MBps) [2024-10-30T17:33:01.606Z] Copying: 265/1024 [MB] (13 MBps) [2024-10-30T17:33:02.547Z] Copying: 286/1024 [MB] (20 MBps) [2024-10-30T17:33:03.488Z] Copying: 315/1024 [MB] (29 MBps) [2024-10-30T17:33:04.428Z] Copying: 333/1024 [MB] (17 MBps) [2024-10-30T17:33:05.819Z] Copying: 348/1024 [MB] (15 MBps) [2024-10-30T17:33:06.760Z] Copying: 364/1024 [MB] (16 MBps) [2024-10-30T17:33:07.702Z] Copying: 377/1024 [MB] (13 MBps) [2024-10-30T17:33:08.645Z] Copying: 393/1024 [MB] (15 MBps) [2024-10-30T17:33:09.586Z] Copying: 407/1024 [MB] (13 MBps) [2024-10-30T17:33:10.529Z] Copying: 421/1024 [MB] (14 MBps) [2024-10-30T17:33:11.474Z] Copying: 433/1024 [MB] (11 MBps) [2024-10-30T17:33:12.418Z] Copying: 447/1024 [MB] (14 MBps) [2024-10-30T17:33:13.808Z] Copying: 474/1024 [MB] (27 MBps) [2024-10-30T17:33:14.753Z] 
Copying: 485/1024 [MB] (11 MBps) [2024-10-30T17:33:15.695Z] Copying: 498/1024 [MB] (12 MBps) [2024-10-30T17:33:16.639Z] Copying: 509/1024 [MB] (11 MBps) [2024-10-30T17:33:17.583Z] Copying: 524/1024 [MB] (14 MBps) [2024-10-30T17:33:18.528Z] Copying: 540/1024 [MB] (15 MBps) [2024-10-30T17:33:19.472Z] Copying: 564/1024 [MB] (24 MBps) [2024-10-30T17:33:20.415Z] Copying: 580/1024 [MB] (15 MBps) [2024-10-30T17:33:21.803Z] Copying: 596/1024 [MB] (16 MBps) [2024-10-30T17:33:22.747Z] Copying: 617/1024 [MB] (21 MBps) [2024-10-30T17:33:23.691Z] Copying: 634/1024 [MB] (17 MBps) [2024-10-30T17:33:24.634Z] Copying: 662/1024 [MB] (27 MBps) [2024-10-30T17:33:25.578Z] Copying: 690/1024 [MB] (27 MBps) [2024-10-30T17:33:26.520Z] Copying: 717/1024 [MB] (27 MBps) [2024-10-30T17:33:27.464Z] Copying: 744/1024 [MB] (27 MBps) [2024-10-30T17:33:28.407Z] Copying: 764/1024 [MB] (19 MBps) [2024-10-30T17:33:29.795Z] Copying: 784/1024 [MB] (19 MBps) [2024-10-30T17:33:30.736Z] Copying: 797/1024 [MB] (13 MBps) [2024-10-30T17:33:31.726Z] Copying: 809/1024 [MB] (12 MBps) [2024-10-30T17:33:32.680Z] Copying: 824/1024 [MB] (14 MBps) [2024-10-30T17:33:33.623Z] Copying: 834/1024 [MB] (10 MBps) [2024-10-30T17:33:34.569Z] Copying: 852/1024 [MB] (18 MBps) [2024-10-30T17:33:35.514Z] Copying: 862/1024 [MB] (10 MBps) [2024-10-30T17:33:36.458Z] Copying: 872/1024 [MB] (10 MBps) [2024-10-30T17:33:37.401Z] Copying: 894/1024 [MB] (21 MBps) [2024-10-30T17:33:38.789Z] Copying: 914/1024 [MB] (20 MBps) [2024-10-30T17:33:39.732Z] Copying: 929/1024 [MB] (15 MBps) [2024-10-30T17:33:40.675Z] Copying: 946/1024 [MB] (17 MBps) [2024-10-30T17:33:41.620Z] Copying: 974/1024 [MB] (27 MBps) [2024-10-30T17:33:42.564Z] Copying: 984/1024 [MB] (10 MBps) [2024-10-30T17:33:43.508Z] Copying: 997/1024 [MB] (12 MBps) [2024-10-30T17:33:44.449Z] Copying: 1007/1024 [MB] (10 MBps) [2024-10-30T17:33:45.396Z] Copying: 1023/1024 [MB] (15 MBps) [2024-10-30T17:33:45.396Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-30 17:33:45.077702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.415 [2024-10-30 17:33:45.077939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:02.415 [2024-10-30 17:33:45.077967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:02.415 [2024-10-30 17:33:45.077976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.415 [2024-10-30 17:33:45.080895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:02.415 [2024-10-30 17:33:45.085330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.415 [2024-10-30 17:33:45.085518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:02.415 [2024-10-30 17:33:45.085587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.386 ms 00:30:02.415 [2024-10-30 17:33:45.085599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.415 [2024-10-30 17:33:45.097000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.415 [2024-10-30 17:33:45.097166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:02.415 [2024-10-30 17:33:45.097195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.155 ms 00:30:02.415 [2024-10-30 17:33:45.097226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.415 [2024-10-30 17:33:45.097263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:02.415 [2024-10-30 17:33:45.097272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:02.415 [2024-10-30 17:33:45.097282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:02.415 [2024-10-30 17:33:45.097290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.415 [2024-10-30 17:33:45.097354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.415 [2024-10-30 17:33:45.097363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:02.415 [2024-10-30 17:33:45.097372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:02.415 [2024-10-30 17:33:45.097384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.415 [2024-10-30 17:33:45.097398] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:02.415 [2024-10-30 17:33:45.097411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:30:02.415 [2024-10-30 17:33:45.097420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097557] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 
17:33:45.097778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:02.415 [2024-10-30 17:33:45.097868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.097996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:30:02.416 [2024-10-30 17:33:45.098012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:02.416 [2024-10-30 17:33:45.098285] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:02.416 [2024-10-30 17:33:45.098294] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c85ed329-2507-423c-8fa1-d5f290e67da9 00:30:02.416 [2024-10-30 17:33:45.098303] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:30:02.416 [2024-10-30 17:33:45.098310] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:30:02.416 [2024-10-30 17:33:45.098318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:30:02.416 [2024-10-30 17:33:45.098326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:30:02.416 [2024-10-30 17:33:45.098334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:02.416 [2024-10-30 17:33:45.098342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:02.416 [2024-10-30 17:33:45.098353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:02.416 [2024-10-30 17:33:45.098360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:02.416 [2024-10-30 17:33:45.098367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:02.416 [2024-10-30 17:33:45.098374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.416 [2024-10-30 17:33:45.098382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:02.416 [2024-10-30 17:33:45.098390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:30:02.416 [2024-10-30 17:33:45.098398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.112014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.416 [2024-10-30 17:33:45.112179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:02.416 [2024-10-30 17:33:45.112218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.600 ms 00:30:02.416 [2024-10-30 17:33:45.112228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.112612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.416 [2024-10-30 17:33:45.112622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:02.416 [2024-10-30 17:33:45.112632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 
00:30:02.416 [2024-10-30 17:33:45.112639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.150796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.150855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:02.416 [2024-10-30 17:33:45.150874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.416 [2024-10-30 17:33:45.150884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.150961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.150971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:02.416 [2024-10-30 17:33:45.150981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.416 [2024-10-30 17:33:45.150991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.151073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.151085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:02.416 [2024-10-30 17:33:45.151094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.416 [2024-10-30 17:33:45.151107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.151124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.151134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:02.416 [2024-10-30 17:33:45.151144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.416 [2024-10-30 17:33:45.151153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.235762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.235825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:02.416 [2024-10-30 17:33:45.235839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.416 [2024-10-30 17:33:45.235854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.304614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.304671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:02.416 [2024-10-30 17:33:45.304683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.416 [2024-10-30 17:33:45.304699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.304761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.304771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:02.416 [2024-10-30 17:33:45.304781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.416 [2024-10-30 17:33:45.304789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.304851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.304862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:02.416 [2024-10-30 17:33:45.304871] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.416 [2024-10-30 17:33:45.304880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.416 [2024-10-30 17:33:45.304959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.416 [2024-10-30 17:33:45.304970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:02.417 [2024-10-30 17:33:45.304979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.417 [2024-10-30 17:33:45.304987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.417 [2024-10-30 17:33:45.305014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.417 [2024-10-30 17:33:45.305026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:02.417 [2024-10-30 17:33:45.305035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.417 [2024-10-30 17:33:45.305043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.417 [2024-10-30 17:33:45.305083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.417 [2024-10-30 17:33:45.305093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:02.417 [2024-10-30 17:33:45.305102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.417 [2024-10-30 17:33:45.305110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.417 [2024-10-30 17:33:45.305163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:02.417 [2024-10-30 17:33:45.305173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:02.417 [2024-10-30 17:33:45.305181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:02.417 [2024-10-30 17:33:45.305189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.417 [2024-10-30 17:33:45.305344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 229.905 ms, result 0 00:30:03.802 00:30:03.802 00:30:03.802 17:33:46 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:03.802 [2024-10-30 17:33:46.643926] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:30:03.802 [2024-10-30 17:33:46.644074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82678 ] 00:30:04.063 [2024-10-30 17:33:46.805966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.063 [2024-10-30 17:33:46.923413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.324 [2024-10-30 17:33:47.212042] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:04.324 [2024-10-30 17:33:47.212119] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:04.587 [2024-10-30 17:33:47.373266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.373501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:04.587 [2024-10-30 17:33:47.373533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:04.587 [2024-10-30 17:33:47.373542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.373611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.373622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:04.587 [2024-10-30 17:33:47.373635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:04.587 [2024-10-30 17:33:47.373642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.373664] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:04.587 [2024-10-30 17:33:47.374449] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:04.587 [2024-10-30 17:33:47.374473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.374482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:04.587 [2024-10-30 17:33:47.374491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:30:04.587 [2024-10-30 17:33:47.374500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.374777] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:04.587 [2024-10-30 17:33:47.374810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.374818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:04.587 [2024-10-30 17:33:47.374832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:04.587 [2024-10-30 17:33:47.374841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.374895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.374905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:04.587 [2024-10-30 17:33:47.374913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:04.587 [2024-10-30 17:33:47.374921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.375491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:04.587 [2024-10-30 17:33:47.375505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:04.587 [2024-10-30 17:33:47.375521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:30:04.587 [2024-10-30 17:33:47.375529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.375598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.375608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:04.587 [2024-10-30 17:33:47.375616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:04.587 [2024-10-30 17:33:47.375624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.375646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.375655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:04.587 [2024-10-30 17:33:47.375664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:04.587 [2024-10-30 17:33:47.375674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.375692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:04.587 [2024-10-30 17:33:47.379988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.380031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:04.587 [2024-10-30 17:33:47.380042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:30:04.587 [2024-10-30 17:33:47.380049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.380091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.380099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:04.587 [2024-10-30 17:33:47.380108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:04.587 [2024-10-30 17:33:47.380115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.380172] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:04.587 [2024-10-30 17:33:47.380196] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:04.587 [2024-10-30 17:33:47.380254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:04.587 [2024-10-30 17:33:47.380270] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:04.587 [2024-10-30 17:33:47.380388] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:04.587 [2024-10-30 17:33:47.380400] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:04.587 [2024-10-30 17:33:47.380411] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:04.587 [2024-10-30 17:33:47.380422] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:04.587 [2024-10-30 17:33:47.380431] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:04.587 [2024-10-30 17:33:47.380439] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:04.587 [2024-10-30 17:33:47.380447] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:04.587 [2024-10-30 17:33:47.380458] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:04.587 [2024-10-30 17:33:47.380466] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:04.587 [2024-10-30 17:33:47.380474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.380481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:04.587 [2024-10-30 17:33:47.380489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:30:04.587 [2024-10-30 17:33:47.380497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.380580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.587 [2024-10-30 17:33:47.380589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:04.587 [2024-10-30 17:33:47.380597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:04.587 [2024-10-30 17:33:47.380605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.587 [2024-10-30 17:33:47.380710] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:04.587 [2024-10-30 17:33:47.380720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:04.587 [2024-10-30 17:33:47.380729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:04.587 [2024-10-30 17:33:47.380736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.587 [2024-10-30 17:33:47.380745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:04.587 [2024-10-30 17:33:47.380752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:04.587 [2024-10-30 17:33:47.380759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:04.587 [2024-10-30 17:33:47.380766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:04.587 [2024-10-30 17:33:47.380773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:04.587 [2024-10-30 17:33:47.380780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:04.587 [2024-10-30 17:33:47.380787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:04.587 [2024-10-30 17:33:47.380796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:04.587 [2024-10-30 17:33:47.380803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:04.587 [2024-10-30 17:33:47.380810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:04.587 [2024-10-30 17:33:47.380817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:04.587 [2024-10-30 17:33:47.380824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.587 [2024-10-30 17:33:47.380832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:04.587 [2024-10-30 17:33:47.380844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:04.587 [2024-10-30 17:33:47.380851] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.588 [2024-10-30 17:33:47.380858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:04.588 [2024-10-30 17:33:47.380865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:04.588 [2024-10-30 17:33:47.380872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.588 [2024-10-30 17:33:47.380879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:04.588 [2024-10-30 17:33:47.380886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:04.588 [2024-10-30 17:33:47.380892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.588 [2024-10-30 17:33:47.380899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:04.588 [2024-10-30 17:33:47.380906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:04.588 [2024-10-30 17:33:47.380912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.588 [2024-10-30 17:33:47.380919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:04.588 [2024-10-30 17:33:47.380926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:04.588 [2024-10-30 17:33:47.380932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.588 [2024-10-30 17:33:47.380939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:04.588 [2024-10-30 17:33:47.380946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:04.588 [2024-10-30 17:33:47.380953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:04.588 [2024-10-30 17:33:47.380960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:04.588 [2024-10-30 17:33:47.380967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:04.588 [2024-10-30 17:33:47.380974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:04.588 [2024-10-30 17:33:47.380981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:04.588 [2024-10-30 17:33:47.380988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:04.588 [2024-10-30 17:33:47.380994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.588 [2024-10-30 17:33:47.381001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:04.588 [2024-10-30 17:33:47.381008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:04.588 [2024-10-30 17:33:47.381015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.588 [2024-10-30 17:33:47.381024] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:04.588 [2024-10-30 17:33:47.381032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:04.588 [2024-10-30 17:33:47.381040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:04.588 [2024-10-30 17:33:47.381047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.588 [2024-10-30 17:33:47.381055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:04.588 [2024-10-30 17:33:47.381062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:04.588 [2024-10-30 17:33:47.381069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:04.588 
[2024-10-30 17:33:47.381077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:04.588 [2024-10-30 17:33:47.381084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:04.588 [2024-10-30 17:33:47.381091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:04.588 [2024-10-30 17:33:47.381099] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:04.588 [2024-10-30 17:33:47.381109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.588 [2024-10-30 17:33:47.381120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:04.588 [2024-10-30 17:33:47.381128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:04.588 [2024-10-30 17:33:47.381135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:04.588 [2024-10-30 17:33:47.381142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:04.588 [2024-10-30 17:33:47.381150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:04.588 [2024-10-30 17:33:47.381157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:04.588 [2024-10-30 17:33:47.381164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:04.588 [2024-10-30 17:33:47.381171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:04.588 [2024-10-30 17:33:47.381178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:04.588 [2024-10-30 17:33:47.381186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:04.588 [2024-10-30 17:33:47.381193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:04.588 [2024-10-30 17:33:47.381213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:04.588 [2024-10-30 17:33:47.381220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:04.588 [2024-10-30 17:33:47.381227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:04.588 [2024-10-30 17:33:47.381235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:04.588 [2024-10-30 17:33:47.381243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.588 [2024-10-30 17:33:47.381251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:04.588 [2024-10-30 17:33:47.381259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:04.588 [2024-10-30 17:33:47.381266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:04.588 [2024-10-30 17:33:47.381275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:04.588 [2024-10-30 17:33:47.381285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.381293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:04.588 [2024-10-30 17:33:47.381302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:30:04.588 [2024-10-30 17:33:47.381310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.413288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.413340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:04.588 [2024-10-30 17:33:47.413354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.936 ms 00:30:04.588 [2024-10-30 17:33:47.413363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.413461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.413470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:04.588 [2024-10-30 17:33:47.413480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:04.588 [2024-10-30 17:33:47.413492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.459671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.459870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:04.588 [2024-10-30 17:33:47.459894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.115 ms 00:30:04.588 [2024-10-30 17:33:47.459903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.459955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.459972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:04.588 [2024-10-30 17:33:47.459982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:04.588 [2024-10-30 17:33:47.459990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.460108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.460120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:04.588 [2024-10-30 17:33:47.460129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:04.588 [2024-10-30 17:33:47.460137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.460299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.460311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:04.588 [2024-10-30 17:33:47.460323] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:30:04.588 [2024-10-30 17:33:47.460331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.476065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.476118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:04.588 [2024-10-30 17:33:47.476131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.712 ms 00:30:04.588 [2024-10-30 17:33:47.476139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.476522] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:04.588 [2024-10-30 17:33:47.476728] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:04.588 [2024-10-30 17:33:47.476749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.476758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:04.588 [2024-10-30 17:33:47.476776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:30:04.588 [2024-10-30 17:33:47.476784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.489097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.489140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:04.588 [2024-10-30 17:33:47.489152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.285 ms 00:30:04.588 [2024-10-30 17:33:47.489160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.489310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.489321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:04.588 [2024-10-30 17:33:47.489329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:30:04.588 [2024-10-30 17:33:47.489336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.588 [2024-10-30 17:33:47.489397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.588 [2024-10-30 17:33:47.489407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:04.588 [2024-10-30 17:33:47.489416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:04.588 [2024-10-30 17:33:47.489424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.490037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.490058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:04.589 [2024-10-30 17:33:47.490067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:30:04.589 [2024-10-30 17:33:47.490074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.490091] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:04.589 [2024-10-30 17:33:47.490104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.490113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:04.589 [2024-10-30 17:33:47.490121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:04.589 [2024-10-30 17:33:47.490129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.503039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:04.589 [2024-10-30 17:33:47.503229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.503242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:04.589 [2024-10-30 17:33:47.503252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.080 ms 00:30:04.589 [2024-10-30 17:33:47.503260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.505510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.505660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:04.589 [2024-10-30 17:33:47.505683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:30:04.589 [2024-10-30 17:33:47.505692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.505803] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:30:04.589 [2024-10-30 17:33:47.506300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.506311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:04.589 [2024-10-30 17:33:47.506322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:30:04.589 [2024-10-30 17:33:47.506330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.506359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.506374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:04.589 [2024-10-30 17:33:47.506383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:04.589 [2024-10-30 17:33:47.506390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.506424] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:04.589 [2024-10-30 17:33:47.506435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.506443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:04.589 [2024-10-30 17:33:47.506451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:04.589 [2024-10-30 17:33:47.506459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.533571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.533626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:04.589 [2024-10-30 17:33:47.533640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.093 ms 00:30:04.589 [2024-10-30 17:33:47.533648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.533749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.589 [2024-10-30 17:33:47.533760] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:04.589 [2024-10-30 17:33:47.533769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:04.589 [2024-10-30 17:33:47.533778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.589 [2024-10-30 17:33:47.535156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.419 ms, result 0 00:30:05.978  [2024-10-30T17:33:49.902Z] Copying: 13/1024 [MB] (13 MBps) [2024-10-30T17:33:50.845Z] Copying: 36/1024 [MB] (22 MBps) [2024-10-30T17:33:51.790Z] Copying: 60/1024 [MB] (23 MBps) [2024-10-30T17:33:52.729Z] Copying: 76/1024 [MB] (16 MBps) [2024-10-30T17:33:54.108Z] Copying: 93/1024 [MB] (16 MBps) [2024-10-30T17:33:55.050Z] Copying: 110/1024 [MB] (17 MBps) [2024-10-30T17:33:55.996Z] Copying: 124/1024 [MB] (14 MBps) [2024-10-30T17:33:56.939Z] Copying: 147/1024 [MB] (23 MBps) [2024-10-30T17:33:57.883Z] Copying: 167/1024 [MB] (19 MBps) [2024-10-30T17:33:58.827Z] Copying: 193/1024 [MB] (26 MBps) [2024-10-30T17:33:59.773Z] Copying: 216/1024 [MB] (22 MBps) [2024-10-30T17:34:01.160Z] Copying: 235/1024 [MB] (18 MBps) [2024-10-30T17:34:01.731Z] Copying: 257/1024 [MB] (22 MBps) [2024-10-30T17:34:03.134Z] Copying: 276/1024 [MB] (18 MBps) [2024-10-30T17:34:03.746Z] Copying: 296/1024 [MB] (20 MBps) [2024-10-30T17:34:05.133Z] Copying: 313/1024 [MB] (17 MBps) [2024-10-30T17:34:06.076Z] Copying: 332/1024 [MB] (19 MBps) [2024-10-30T17:34:07.018Z] Copying: 352/1024 [MB] (19 MBps) [2024-10-30T17:34:07.963Z] Copying: 375/1024 [MB] (22 MBps) [2024-10-30T17:34:08.907Z] Copying: 392/1024 [MB] (17 MBps) [2024-10-30T17:34:09.852Z] Copying: 409/1024 [MB] (16 MBps) [2024-10-30T17:34:10.796Z] Copying: 423/1024 [MB] (14 MBps) [2024-10-30T17:34:11.741Z] Copying: 441/1024 [MB] (18 MBps) [2024-10-30T17:34:13.126Z] Copying: 461/1024 [MB] (20 MBps) [2024-10-30T17:34:14.070Z] Copying: 475/1024 [MB] (13 MBps) [2024-10-30T17:34:15.015Z] Copying: 492/1024 [MB] (17 MBps) [2024-10-30T17:34:15.958Z] Copying: 510/1024 [MB] (18 MBps) [2024-10-30T17:34:16.901Z] Copying: 521/1024 [MB] (10 MBps) [2024-10-30T17:34:17.847Z] Copying: 532/1024 [MB] (10 MBps) [2024-10-30T17:34:18.792Z] Copying: 547/1024 [MB] (14 MBps) [2024-10-30T17:34:19.735Z] Copying: 562/1024 [MB] (15 MBps) [2024-10-30T17:34:21.119Z] Copying: 581/1024 [MB] (18 MBps) [2024-10-30T17:34:22.062Z] Copying: 599/1024 [MB] (17 MBps) [2024-10-30T17:34:23.006Z] Copying: 622/1024 [MB] (22 MBps) [2024-10-30T17:34:23.949Z] Copying: 641/1024 [MB] (19 MBps) [2024-10-30T17:34:24.894Z] Copying: 657/1024 [MB] (16 MBps) [2024-10-30T17:34:25.838Z] Copying: 672/1024 [MB] (15 MBps) [2024-10-30T17:34:26.782Z] Copying: 690/1024 [MB] (17 MBps) [2024-10-30T17:34:27.726Z] Copying: 707/1024 [MB] (17 MBps) [2024-10-30T17:34:29.113Z] Copying: 730/1024 [MB] (22 MBps) [2024-10-30T17:34:30.057Z] Copying: 745/1024 [MB] (14 MBps) [2024-10-30T17:34:31.000Z] Copying: 758/1024 [MB] (13 MBps) [2024-10-30T17:34:31.943Z] Copying: 776/1024 [MB] (17 MBps) [2024-10-30T17:34:32.890Z] Copying: 797/1024 [MB] (21 MBps) [2024-10-30T17:34:33.838Z] Copying: 819/1024 [MB] (21 MBps) [2024-10-30T17:34:34.788Z] Copying: 835/1024 [MB] (16 MBps) [2024-10-30T17:34:35.818Z] Copying: 850/1024 [MB] (14 MBps) [2024-10-30T17:34:36.762Z] Copying: 864/1024 [MB] (13 MBps) [2024-10-30T17:34:38.148Z] Copying: 875/1024 [MB] (11 MBps) [2024-10-30T17:34:39.092Z] Copying: 902/1024 [MB] (26 MBps) [2024-10-30T17:34:40.037Z] Copying: 912/1024 [MB] (10 MBps) 
[2024-10-30T17:34:40.979Z] Copying: 925/1024 [MB] (12 MBps) [2024-10-30T17:34:41.923Z] Copying: 948/1024 [MB] (22 MBps) [2024-10-30T17:34:42.867Z] Copying: 964/1024 [MB] (16 MBps) [2024-10-30T17:34:43.812Z] Copying: 981/1024 [MB] (16 MBps) [2024-10-30T17:34:44.756Z] Copying: 992/1024 [MB] (10 MBps) [2024-10-30T17:34:45.329Z] Copying: 1013/1024 [MB] (20 MBps) [2024-10-30T17:34:45.903Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-30 17:34:45.693780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.922 [2024-10-30 17:34:45.693861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:02.922 [2024-10-30 17:34:45.693877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:02.922 [2024-10-30 17:34:45.693886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.922 [2024-10-30 17:34:45.693909] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:02.922 [2024-10-30 17:34:45.696799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.922 [2024-10-30 17:34:45.696837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:02.922 [2024-10-30 17:34:45.696848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.874 ms 00:31:02.922 [2024-10-30 17:34:45.696857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.922 [2024-10-30 17:34:45.697095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.922 [2024-10-30 17:34:45.697152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:02.922 [2024-10-30 17:34:45.697161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:31:02.922 [2024-10-30 17:34:45.697170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.922 [2024-10-30 17:34:45.697209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.922 [2024-10-30 17:34:45.697218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:02.922 [2024-10-30 17:34:45.697227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:02.922 [2024-10-30 17:34:45.697235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.922 [2024-10-30 17:34:45.697287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.922 [2024-10-30 17:34:45.697296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:02.922 [2024-10-30 17:34:45.697306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:02.922 [2024-10-30 17:34:45.697314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.922 [2024-10-30 17:34:45.697327] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:02.922 [2024-10-30 17:34:45.697339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:02.922 [2024-10-30 17:34:45.697349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 
17:34:45.697372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:02.922 [2024-10-30 17:34:45.697467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:31:02.923 [2024-10-30 17:34:45.697554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.697593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:02.923 [2024-10-30 17:34:45.698916] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:02.923 [2024-10-30 17:34:45.698924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c85ed329-2507-423c-8fa1-d5f290e67da9 00:31:02.923 [2024-10-30 17:34:45.698931] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:31:02.923 [2024-10-30 17:34:45.698940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 
00:31:02.923 [2024-10-30 17:34:45.698947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:31:02.923 [2024-10-30 17:34:45.698955] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:31:02.923 [2024-10-30 17:34:45.698962] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:02.923 [2024-10-30 17:34:45.698973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:02.923 [2024-10-30 17:34:45.698980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:02.923 [2024-10-30 17:34:45.698986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:02.923 [2024-10-30 17:34:45.698992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:02.923 [2024-10-30 17:34:45.698999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.923 [2024-10-30 17:34:45.699009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:02.924 [2024-10-30 17:34:45.699016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:31:02.924 [2024-10-30 17:34:45.699023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.924 [2024-10-30 17:34:45.712621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.924 [2024-10-30 17:34:45.712662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:02.924 [2024-10-30 17:34:45.712673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.581 ms 00:31:02.924 [2024-10-30 17:34:45.712687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.924 [2024-10-30 17:34:45.713055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.924 [2024-10-30 17:34:45.713074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:02.924 [2024-10-30 17:34:45.713083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:31:02.924 [2024-10-30 17:34:45.713090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.924 [2024-10-30 17:34:45.749619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.924 [2024-10-30 17:34:45.749669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.924 [2024-10-30 17:34:45.749685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.924 [2024-10-30 17:34:45.749694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.924 [2024-10-30 17:34:45.749784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.924 [2024-10-30 17:34:45.749796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.924 [2024-10-30 17:34:45.749806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.924 [2024-10-30 17:34:45.749815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.924 [2024-10-30 17:34:45.749879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.924 [2024-10-30 17:34:45.749890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.924 [2024-10-30 17:34:45.749900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.924 [2024-10-30 17:34:45.749913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.924 [2024-10-30 17:34:45.749930] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.924 [2024-10-30 17:34:45.749940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.924 [2024-10-30 17:34:45.749948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.924 [2024-10-30 17:34:45.749956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.924 [2024-10-30 17:34:45.834825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.924 [2024-10-30 17:34:45.834882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.924 [2024-10-30 17:34:45.834902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.924 [2024-10-30 17:34:45.834911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.184 [2024-10-30 17:34:45.904176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.184 [2024-10-30 17:34:45.904242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:03.184 [2024-10-30 17:34:45.904262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.184 [2024-10-30 17:34:45.904271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.184 [2024-10-30 17:34:45.904353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.184 [2024-10-30 17:34:45.904363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:03.184 [2024-10-30 17:34:45.904373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.184 [2024-10-30 17:34:45.904381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.184 [2024-10-30 17:34:45.904424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.184 [2024-10-30 17:34:45.904434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:03.184 [2024-10-30 17:34:45.904442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.184 [2024-10-30 17:34:45.904450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.184 [2024-10-30 17:34:45.904531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.184 [2024-10-30 17:34:45.904541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:03.184 [2024-10-30 17:34:45.904550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.184 [2024-10-30 17:34:45.904558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.184 [2024-10-30 17:34:45.904587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.184 [2024-10-30 17:34:45.904597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:03.184 [2024-10-30 17:34:45.904605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.184 [2024-10-30 17:34:45.904612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.184 [2024-10-30 17:34:45.904653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.184 [2024-10-30 17:34:45.904662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:03.184 [2024-10-30 17:34:45.904670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.184 [2024-10-30 17:34:45.904679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:31:03.184 [2024-10-30 17:34:45.904727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.184 [2024-10-30 17:34:45.904737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:03.184 [2024-10-30 17:34:45.904746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.184 [2024-10-30 17:34:45.904754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.184 [2024-10-30 17:34:45.904882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.066 ms, result 0 00:31:03.755 00:31:03.755 00:31:03.755 17:34:46 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:06.303 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 80635 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # '[' -z 80635 ']' 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # kill -0 80635 00:31:06.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (80635) - No such process 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- common/autotest_common.sh@979 -- # echo 'Process with pid 80635 is not found' 00:31:06.304 Process with pid 80635 is not found 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:31:06.304 Remove shared memory files 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_band_md /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_l2p_l1 /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_l2p_l2 /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_l2p_l2_ctx /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_nvc_md /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_p2l_pool /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_sb /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_sb_shm /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_trim_bitmap /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_trim_log /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_trim_md /dev/hugepages/ftl_c85ed329-2507-423c-8fa1-d5f290e67da9_vmap 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:31:06.304 00:31:06.304 real 4m22.977s 00:31:06.304 user 4m10.766s 00:31:06.304 sys 0m12.037s 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:31:06.304 17:34:48 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:06.304 ************************************ 00:31:06.304 END TEST ftl_restore_fast 00:31:06.304 ************************************ 00:31:06.304 Process with pid 72118 is not found 00:31:06.304 17:34:48 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:06.304 17:34:48 ftl -- ftl/ftl.sh@14 -- # killprocess 72118 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@952 -- # '[' -z 72118 ']' 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@956 -- # kill -0 72118 00:31:06.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72118) - No such process 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72118 is not found' 00:31:06.304 17:34:48 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:06.304 17:34:48 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83325 00:31:06.304 17:34:48 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83325 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@833 -- # '[' -z 83325 ']' 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:06.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:06.304 17:34:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:06.304 17:34:48 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:06.304 [2024-10-30 17:34:48.984564] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 24.03.0 initialization... 
00:31:06.304 [2024-10-30 17:34:48.984689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83325 ] 00:31:06.304 [2024-10-30 17:34:49.146831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.304 [2024-10-30 17:34:49.264911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.245 17:34:49 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:07.245 17:34:49 ftl -- common/autotest_common.sh@866 -- # return 0 00:31:07.245 17:34:49 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:07.245 nvme0n1 00:31:07.506 17:34:50 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:07.506 17:34:50 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:07.506 17:34:50 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:07.506 17:34:50 ftl -- ftl/common.sh@28 -- # stores=d795f756-83f3-46fd-8815-0cbacccc1f9a 00:31:07.506 17:34:50 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:07.506 17:34:50 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d795f756-83f3-46fd-8815-0cbacccc1f9a 00:31:07.767 17:34:50 ftl -- ftl/ftl.sh@23 -- # killprocess 83325 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@952 -- # '[' -z 83325 ']' 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@956 -- # kill -0 83325 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@957 -- # uname 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83325 00:31:07.767 killing process with pid 83325 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83325' 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@971 -- # kill 83325 00:31:07.767 17:34:50 ftl -- common/autotest_common.sh@976 -- # wait 83325 00:31:09.154 17:34:52 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:09.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:09.413 Waiting for block devices as requested 00:31:09.413 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:09.413 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:09.413 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:09.674 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:14.968 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:14.968 Remove shared memory files 00:31:14.968 17:34:57 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:14.968 17:34:57 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:14.968 17:34:57 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:14.968 17:34:57 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:14.968 17:34:57 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:14.968 17:34:57 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:14.968 17:34:57 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:14.968 
************************************ 00:31:14.968 END TEST ftl 00:31:14.968 ************************************ 00:31:14.968 00:31:14.968 real 16m44.954s 00:31:14.968 user 18m51.645s 00:31:14.968 sys 1m19.222s 00:31:14.968 17:34:57 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:14.968 17:34:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:14.968 17:34:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:14.968 17:34:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:14.968 17:34:57 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:31:14.968 17:34:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:14.968 17:34:57 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:31:14.968 17:34:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:14.968 17:34:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:14.968 17:34:57 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:31:14.968 17:34:57 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:31:14.968 17:34:57 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:31:14.968 17:34:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:14.968 17:34:57 -- common/autotest_common.sh@10 -- # set +x 00:31:14.968 17:34:57 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:31:14.968 17:34:57 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:31:14.968 17:34:57 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:31:14.968 17:34:57 -- common/autotest_common.sh@10 -- # set +x 00:31:16.354 INFO: APP EXITING 00:31:16.354 INFO: killing all VMs 00:31:16.354 INFO: killing vhost app 00:31:16.354 INFO: EXIT DONE 00:31:16.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:16.923 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:16.923 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:16.923 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:16.923 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:17.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:17.755 Cleaning 00:31:17.755 Removing: /var/run/dpdk/spdk0/config 00:31:17.755 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:17.755 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:17.755 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:17.755 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:17.755 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:17.755 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:17.755 Removing: /var/run/dpdk/spdk0 00:31:17.755 Removing: /var/run/dpdk/spdk_pid56864 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57066 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57273 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57366 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57406 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57528 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57541 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57734 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57821 00:31:17.755 Removing: /var/run/dpdk/spdk_pid57911 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58017 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58103 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58142 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58179 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58254 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58355 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58787 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58840 
00:31:17.755 Removing: /var/run/dpdk/spdk_pid58892 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58908 00:31:17.755 Removing: /var/run/dpdk/spdk_pid58999 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59015 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59106 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59122 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59175 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59193 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59246 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59259 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59413 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59444 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59533 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59700 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59778 00:31:17.755 Removing: /var/run/dpdk/spdk_pid59815 00:31:17.755 Removing: /var/run/dpdk/spdk_pid60247 00:31:17.755 Removing: /var/run/dpdk/spdk_pid60345 00:31:17.755 Removing: /var/run/dpdk/spdk_pid60454 00:31:17.755 Removing: /var/run/dpdk/spdk_pid60507 00:31:17.755 Removing: /var/run/dpdk/spdk_pid60527 00:31:17.755 Removing: /var/run/dpdk/spdk_pid60611 00:31:17.755 Removing: /var/run/dpdk/spdk_pid61226 00:31:17.755 Removing: /var/run/dpdk/spdk_pid61263 00:31:17.755 Removing: /var/run/dpdk/spdk_pid61729 00:31:17.755 Removing: /var/run/dpdk/spdk_pid61827 00:31:17.756 Removing: /var/run/dpdk/spdk_pid61936 00:31:17.756 Removing: /var/run/dpdk/spdk_pid61989 00:31:17.756 Removing: /var/run/dpdk/spdk_pid62009 00:31:17.756 Removing: /var/run/dpdk/spdk_pid62040 00:31:17.756 Removing: /var/run/dpdk/spdk_pid63876 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64002 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64006 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64018 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64071 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64075 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64087 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64132 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64135 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64147 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64192 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64196 00:31:17.756 Removing: /var/run/dpdk/spdk_pid64208 00:31:17.756 Removing: /var/run/dpdk/spdk_pid65572 00:31:17.756 Removing: /var/run/dpdk/spdk_pid65669 00:31:17.756 Removing: /var/run/dpdk/spdk_pid67067 00:31:17.756 Removing: /var/run/dpdk/spdk_pid68449 00:31:17.756 Removing: /var/run/dpdk/spdk_pid68538 00:31:17.756 Removing: /var/run/dpdk/spdk_pid68620 00:31:17.756 Removing: /var/run/dpdk/spdk_pid68696 00:31:17.756 Removing: /var/run/dpdk/spdk_pid68806 00:31:17.756 Removing: /var/run/dpdk/spdk_pid68880 00:31:17.756 Removing: /var/run/dpdk/spdk_pid69017 00:31:17.756 Removing: /var/run/dpdk/spdk_pid69375 00:31:17.756 Removing: /var/run/dpdk/spdk_pid69406 00:31:17.756 Removing: /var/run/dpdk/spdk_pid69858 00:31:17.756 Removing: /var/run/dpdk/spdk_pid70043 00:31:17.756 Removing: /var/run/dpdk/spdk_pid70137 00:31:17.756 Removing: /var/run/dpdk/spdk_pid70247 00:31:17.756 Removing: /var/run/dpdk/spdk_pid70300 00:31:17.756 Removing: /var/run/dpdk/spdk_pid70320 00:31:17.756 Removing: /var/run/dpdk/spdk_pid70615 00:31:17.756 Removing: /var/run/dpdk/spdk_pid70670 00:31:17.756 Removing: /var/run/dpdk/spdk_pid70737 00:31:17.756 Removing: /var/run/dpdk/spdk_pid71140 00:31:17.756 Removing: /var/run/dpdk/spdk_pid71291 00:31:17.756 Removing: /var/run/dpdk/spdk_pid72118 00:31:17.756 Removing: /var/run/dpdk/spdk_pid72250 00:31:17.756 Removing: /var/run/dpdk/spdk_pid72445 00:31:17.756 Removing: 
/var/run/dpdk/spdk_pid72543 00:31:17.756 Removing: /var/run/dpdk/spdk_pid72840 00:31:17.756 Removing: /var/run/dpdk/spdk_pid73110 00:31:17.756 Removing: /var/run/dpdk/spdk_pid73470 00:31:17.756 Removing: /var/run/dpdk/spdk_pid73655 00:31:17.756 Removing: /var/run/dpdk/spdk_pid73798 00:31:17.756 Removing: /var/run/dpdk/spdk_pid73851 00:31:17.756 Removing: /var/run/dpdk/spdk_pid74021 00:31:17.756 Removing: /var/run/dpdk/spdk_pid74052 00:31:17.756 Removing: /var/run/dpdk/spdk_pid74099 00:31:17.756 Removing: /var/run/dpdk/spdk_pid74336 00:31:17.756 Removing: /var/run/dpdk/spdk_pid74583 00:31:17.756 Removing: /var/run/dpdk/spdk_pid75061 00:31:17.756 Removing: /var/run/dpdk/spdk_pid75662 00:31:17.756 Removing: /var/run/dpdk/spdk_pid76157 00:31:17.756 Removing: /var/run/dpdk/spdk_pid76855 00:31:17.756 Removing: /var/run/dpdk/spdk_pid77010 00:31:17.756 Removing: /var/run/dpdk/spdk_pid77086 00:31:17.756 Removing: /var/run/dpdk/spdk_pid77557 00:31:17.756 Removing: /var/run/dpdk/spdk_pid77615 00:31:17.756 Removing: /var/run/dpdk/spdk_pid78309 00:31:17.756 Removing: /var/run/dpdk/spdk_pid78740 00:31:17.756 Removing: /var/run/dpdk/spdk_pid79563 00:31:17.756 Removing: /var/run/dpdk/spdk_pid79701 00:31:18.017 Removing: /var/run/dpdk/spdk_pid79738 00:31:18.017 Removing: /var/run/dpdk/spdk_pid79802 00:31:18.017 Removing: /var/run/dpdk/spdk_pid79858 00:31:18.017 Removing: /var/run/dpdk/spdk_pid79918 00:31:18.017 Removing: /var/run/dpdk/spdk_pid80109 00:31:18.017 Removing: /var/run/dpdk/spdk_pid80204 00:31:18.017 Removing: /var/run/dpdk/spdk_pid80266 00:31:18.017 Removing: /var/run/dpdk/spdk_pid80333 00:31:18.017 Removing: /var/run/dpdk/spdk_pid80377 00:31:18.017 Removing: /var/run/dpdk/spdk_pid80445 00:31:18.017 Removing: /var/run/dpdk/spdk_pid80635 00:31:18.017 Removing: /var/run/dpdk/spdk_pid80862 00:31:18.017 Removing: /var/run/dpdk/spdk_pid81441 00:31:18.017 Removing: /var/run/dpdk/spdk_pid82049 00:31:18.017 Removing: /var/run/dpdk/spdk_pid82678 00:31:18.017 Removing: /var/run/dpdk/spdk_pid83325 00:31:18.017 Clean 00:31:18.017 17:35:00 -- common/autotest_common.sh@1451 -- # return 0 00:31:18.017 17:35:00 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:31:18.017 17:35:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:18.017 17:35:00 -- common/autotest_common.sh@10 -- # set +x 00:31:18.017 17:35:00 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:31:18.017 17:35:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:18.017 17:35:00 -- common/autotest_common.sh@10 -- # set +x 00:31:18.017 17:35:00 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:18.017 17:35:00 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:18.017 17:35:00 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:18.017 17:35:00 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:31:18.017 17:35:00 -- spdk/autotest.sh@394 -- # hostname 00:31:18.017 17:35:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:18.278 geninfo: WARNING: invalid characters removed from testname! 
00:31:44.870 17:35:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:46.787 17:35:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:48.771 17:35:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:52.078 17:35:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:53.993 17:35:36 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:55.906 17:35:38 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:58.450 17:35:40 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:58.450 17:35:40 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:58.450 17:35:40 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:58.450 17:35:40 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:58.450 17:35:40 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:58.450 17:35:40 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:58.450 + [[ -n 5031 ]] 00:31:58.450 + sudo kill 5031 00:31:58.461 [Pipeline] } 00:31:58.479 [Pipeline] // timeout 00:31:58.485 [Pipeline] } 00:31:58.501 [Pipeline] // stage 00:31:58.507 [Pipeline] } 00:31:58.524 [Pipeline] // catchError 00:31:58.534 [Pipeline] stage 00:31:58.537 [Pipeline] { (Stop VM) 00:31:58.551 [Pipeline] sh 00:31:58.834 + vagrant halt 00:32:01.374 ==> default: Halting domain... 
00:32:04.693 [Pipeline] sh 00:32:04.976 + vagrant destroy -f 00:32:07.517 ==> default: Removing domain... 00:32:07.821 [Pipeline] sh 00:32:08.100 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:32:08.110 [Pipeline] } 00:32:08.124 [Pipeline] // stage 00:32:08.128 [Pipeline] } 00:32:08.141 [Pipeline] // dir 00:32:08.146 [Pipeline] } 00:32:08.159 [Pipeline] // wrap 00:32:08.164 [Pipeline] } 00:32:08.175 [Pipeline] // catchError 00:32:08.185 [Pipeline] stage 00:32:08.187 [Pipeline] { (Epilogue) 00:32:08.200 [Pipeline] sh 00:32:08.484 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:13.766 [Pipeline] catchError 00:32:13.768 [Pipeline] { 00:32:13.782 [Pipeline] sh 00:32:14.068 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:14.068 Artifacts sizes are good 00:32:14.079 [Pipeline] } 00:32:14.095 [Pipeline] // catchError 00:32:14.108 [Pipeline] archiveArtifacts 00:32:14.115 Archiving artifacts 00:32:14.251 [Pipeline] cleanWs 00:32:14.266 [WS-CLEANUP] Deleting project workspace... 00:32:14.266 [WS-CLEANUP] Deferred wipeout is used... 00:32:14.295 [WS-CLEANUP] done 00:32:14.297 [Pipeline] } 00:32:14.312 [Pipeline] // stage 00:32:14.317 [Pipeline] } 00:32:14.330 [Pipeline] // node 00:32:14.336 [Pipeline] End of Pipeline 00:32:14.372 Finished: SUCCESS